Jan 23 18:00:55 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 18:00:55 crc restorecon[4759]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:00:55 crc restorecon[4759]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 18:00:55 crc restorecon[4759]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:55 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 
18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 
crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 
18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:00:56 crc restorecon[4759]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 
crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc 
restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:00:56 crc restorecon[4759]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 18:00:57 crc kubenswrapper[4760]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.311334 4760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314776 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314797 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314801 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314805 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314810 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314814 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314817 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314821 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314825 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314829 4760 
feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314834 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314839 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314843 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314848 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314853 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314858 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314863 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314867 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314871 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314875 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314881 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314886 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314891 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314894 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314898 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314902 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314905 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314909 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314913 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314916 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314920 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314923 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314927 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314931 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314935 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314938 
4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314942 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314946 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314949 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314953 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314956 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314960 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314964 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314967 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314970 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314974 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314978 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314981 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314985 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314989 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 
18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314992 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.314996 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315000 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315004 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315008 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315011 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315015 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315019 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315023 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315027 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315030 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315034 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315037 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315041 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315044 4760 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315048 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315052 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315056 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315059 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315064 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.315068 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315300 4760 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315311 4760 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315321 4760 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315327 4760 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315332 4760 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315337 4760 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315342 4760 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315349 4760 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315353 4760 
flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315357 4760 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315361 4760 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315366 4760 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315371 4760 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315377 4760 flags.go:64] FLAG: --cgroup-root="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315381 4760 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315385 4760 flags.go:64] FLAG: --client-ca-file="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315389 4760 flags.go:64] FLAG: --cloud-config="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315393 4760 flags.go:64] FLAG: --cloud-provider="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315397 4760 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315417 4760 flags.go:64] FLAG: --cluster-domain="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315422 4760 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315426 4760 flags.go:64] FLAG: --config-dir="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315431 4760 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315436 4760 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315441 4760 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 
18:00:57.315445 4760 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315449 4760 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315454 4760 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315458 4760 flags.go:64] FLAG: --contention-profiling="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315462 4760 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315466 4760 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315470 4760 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315474 4760 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315480 4760 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315484 4760 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315488 4760 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315493 4760 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315497 4760 flags.go:64] FLAG: --enable-server="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315501 4760 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315507 4760 flags.go:64] FLAG: --event-burst="100" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315511 4760 flags.go:64] FLAG: --event-qps="50" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315515 4760 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 
18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315519 4760 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315523 4760 flags.go:64] FLAG: --eviction-hard="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315528 4760 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315534 4760 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315538 4760 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315542 4760 flags.go:64] FLAG: --eviction-soft="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315547 4760 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315551 4760 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315555 4760 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315559 4760 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315563 4760 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315568 4760 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315573 4760 flags.go:64] FLAG: --feature-gates="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315579 4760 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315584 4760 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315590 4760 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315595 4760 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315600 4760 flags.go:64] FLAG: --healthz-port="10248" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315605 4760 flags.go:64] FLAG: --help="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315611 4760 flags.go:64] FLAG: --hostname-override="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315615 4760 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315620 4760 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315626 4760 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315630 4760 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315635 4760 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315640 4760 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315645 4760 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315650 4760 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315654 4760 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315660 4760 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315666 4760 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315670 4760 flags.go:64] FLAG: --kube-reserved="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315675 4760 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315680 4760 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315685 4760 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315696 4760 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315701 4760 flags.go:64] FLAG: --lock-file="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315706 4760 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315711 4760 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315716 4760 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315723 4760 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315728 4760 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315734 4760 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315739 4760 flags.go:64] FLAG: --logging-format="text" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315743 4760 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315748 4760 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315753 4760 flags.go:64] FLAG: --manifest-url="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315758 4760 flags.go:64] FLAG: --manifest-url-header="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315765 4760 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315770 4760 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315776 4760 
flags.go:64] FLAG: --max-pods="110" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315781 4760 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315786 4760 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315791 4760 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315796 4760 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315801 4760 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315806 4760 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315811 4760 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315824 4760 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315830 4760 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315835 4760 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315840 4760 flags.go:64] FLAG: --pod-cidr="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315844 4760 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315853 4760 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315857 4760 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315862 4760 flags.go:64] FLAG: --pods-per-core="0" Jan 23 18:00:57 
crc kubenswrapper[4760]: I0123 18:00:57.315866 4760 flags.go:64] FLAG: --port="10250" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315872 4760 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315877 4760 flags.go:64] FLAG: --provider-id="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315882 4760 flags.go:64] FLAG: --qos-reserved="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315886 4760 flags.go:64] FLAG: --read-only-port="10255" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315890 4760 flags.go:64] FLAG: --register-node="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315894 4760 flags.go:64] FLAG: --register-schedulable="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315898 4760 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315905 4760 flags.go:64] FLAG: --registry-burst="10" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315910 4760 flags.go:64] FLAG: --registry-qps="5" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315914 4760 flags.go:64] FLAG: --reserved-cpus="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315918 4760 flags.go:64] FLAG: --reserved-memory="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315923 4760 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315927 4760 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315931 4760 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315935 4760 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315939 4760 flags.go:64] FLAG: --runonce="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315943 4760 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315948 4760 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315952 4760 flags.go:64] FLAG: --seccomp-default="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315956 4760 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315960 4760 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315964 4760 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315968 4760 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315973 4760 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315977 4760 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315981 4760 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315985 4760 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315989 4760 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315993 4760 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.315997 4760 flags.go:64] FLAG: --system-cgroups="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316001 4760 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316007 4760 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316013 4760 flags.go:64] FLAG: --tls-cert-file="" Jan 23 
18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316017 4760 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316023 4760 flags.go:64] FLAG: --tls-min-version="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316027 4760 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316031 4760 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316035 4760 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316039 4760 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316043 4760 flags.go:64] FLAG: --v="2" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316049 4760 flags.go:64] FLAG: --version="false" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316054 4760 flags.go:64] FLAG: --vmodule="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316059 4760 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.316063 4760 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316164 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316169 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316174 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316178 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316182 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:00:57 crc 
kubenswrapper[4760]: W0123 18:00:57.316186 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316190 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316193 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316197 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316200 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316205 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316209 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316213 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316217 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316222 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316226 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316230 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316235 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316239 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316244 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 
23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316250 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316255 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316259 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316264 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316268 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316273 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.316353 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327486 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327523 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327529 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327533 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327539 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327545 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327550 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327556 4760 feature_gate.go:330] unrecognized feature 
gate: VSphereControlPlaneMachineSet Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327564 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327574 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327580 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327595 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327617 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327624 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327628 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327636 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327641 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327645 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327651 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327655 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327659 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327663 4760 feature_gate.go:330] unrecognized 
feature gate: NetworkDiagnosticsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327669 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327676 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327683 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327688 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327730 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327735 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327740 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327747 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327751 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327757 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327761 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327766 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327770 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.327774 4760 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328085 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328095 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328100 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328106 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328111 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328116 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328119 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.328124 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.328132 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.334364 4760 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.334391 4760 server.go:493] "Golang 
settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334461 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334469 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334473 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334476 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334480 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334484 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334488 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334491 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334495 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334500 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334505 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334510 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334514 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334519 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334524 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334528 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334534 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334538 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334543 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334547 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334551 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334557 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334565 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334573 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334578 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334583 4760 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334587 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334593 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334597 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334601 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334606 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334609 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334613 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334617 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334620 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334624 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334627 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334631 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334635 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334638 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 
18:00:57.334642 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334645 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334649 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334652 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334658 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334663 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334667 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334672 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334676 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334680 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334683 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334687 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334690 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334693 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 
18:00:57.334697 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334701 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334704 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334707 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334711 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334714 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334719 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334724 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334728 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334732 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334736 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334740 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334743 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334747 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 
18:00:57.334750 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334754 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334757 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.334763 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334881 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334887 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334891 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334895 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334899 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334903 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334907 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334912 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot 
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334916 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334920 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334924 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334928 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334933 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334938 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334942 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334947 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334950 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334955 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334959 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334964 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334968 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334972 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334976 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334980 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334985 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334990 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.334994 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335000 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335005 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335010 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335014 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335018 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 
18:00:57.335022 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335026 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335031 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335035 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335040 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335044 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335048 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335051 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335055 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335059 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335063 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335067 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335087 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335092 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335096 4760 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335101 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335105 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335110 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335114 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335119 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335124 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335128 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335132 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335137 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335140 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335145 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335148 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335152 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335158 4760 feature_gate.go:330] unrecognized feature 
gate: NewOLM Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335162 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335166 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335171 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335175 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335179 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335184 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335188 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335192 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335196 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.335200 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.335206 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:00:57 crc 
kubenswrapper[4760]: I0123 18:00:57.335512 4760 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.338477 4760 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.338567 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.339194 4760 server.go:997] "Starting client certificate rotation" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.339213 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.339434 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 06:53:52.028802129 +0000 UTC Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.339577 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.365559 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.379504 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.385752 4760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 
18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.392688 4760 log.go:25] "Validated CRI v1 runtime API" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.416826 4760 log.go:25] "Validated CRI v1 image API" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.418699 4760 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.421126 4760 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-17-56-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.421201 4760 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.462438 4760 manager.go:217] Machine: {Timestamp:2026-01-23 18:00:57.460340174 +0000 UTC m=+0.462798147 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9c1e1f52-8483-49f8-b4b1-1ac575f28e02 BootID:0551ff4f-58bc-46b1-acf7-c08b9bc381c4 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 
DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:92:b3:f1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:92:b3:f1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:15:21:37 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a6:ed:62 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8c:e6:5b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:25:c8:35 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:63:f6:38 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:56:05:a0:f6:dc:97 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:ad:9e:85:64:b4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 
Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 
Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.462829 4760 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.463032 4760 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.463565 4760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.463872 4760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.463931 4760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.464453 4760 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.464474 4760 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.464787 4760 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.464837 4760 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.465360 4760 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.465516 4760 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.466641 4760 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.466691 4760 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.466740 4760 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.466768 4760 kubelet.go:324] "Adding apiserver pod source"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.466794 4760 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.487143 4760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.487749 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489087 4760 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489832 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489884 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489904 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489922 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489949 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489971 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.489988 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.490014 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.490028 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.490041 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.494300 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.494325 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.494593 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.495288 4760 server.go:1280] "Started kubelet"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.495495 4760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:00:57 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.499168 4760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.499648 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.499698 4760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.502719 4760 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.502744 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.502852 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.502944 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.502999 4760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.503333 4760 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.503362 4760 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.502951 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.503564 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.503583 4760 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.503782 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.504189 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:12:10.012660759 +0000 UTC
Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.504260 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.504420 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.505296 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="200ms"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.505524 4760 factory.go:55] Registering systemd factory
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.505580 4760 factory.go:221] Registration of the systemd container factory successfully
Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.505088 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.90:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d6e133abb0576 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:00:57.495242102 +0000 UTC m=+0.497700105,LastTimestamp:2026-01-23 18:00:57.495242102 +0000 UTC m=+0.497700105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.506711 4760 factory.go:153] Registering CRI-O factory
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.506737 4760 factory.go:221] Registration of the crio container factory successfully
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.506825 4760 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.506854 4760 factory.go:103] Registering Raw factory
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.506877 4760 manager.go:1196] Started watching for new ooms in manager
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.511189 4760 manager.go:319] Starting recovery of all containers
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514620 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514671 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514687 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514700 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514713 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514729 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514743 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514757 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514774 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514787 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514799 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514814 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514828 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514842 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514854 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514901 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514913 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.514927 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515664 4760 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515706 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515722 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515735 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515749 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515763 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515777 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515791 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515804 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515819 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515833 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515845 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515858 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515870 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515882 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515894 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515906 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515919 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515931 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515943 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515956 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515970 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515983 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.515996 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516009 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516023 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516036 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516051 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516066 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516083 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516100 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516114 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516131 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516146 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516158 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516209 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516240 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516258 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516273 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516289 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516304 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516319 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516336 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516351 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516368 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516383 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516401 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516445 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516462 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516479 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516496 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516516 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516532 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516549 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516566 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516581 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516598 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516615 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516632 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516651 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516669 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516687 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516704 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516721 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516751 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3"
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516770 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516788 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516812 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516827 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516842 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516855 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516869 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516882 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516895 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516910 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516925 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516940 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516954 4760 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516968 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516981 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.516994 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517008 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517022 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517036 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517050 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517063 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517076 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517096 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517113 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517128 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517144 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517158 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517172 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517187 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517200 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517215 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517230 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517245 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517296 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517311 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517326 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517338 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517350 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517362 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517374 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517387 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517399 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517432 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517445 
4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517457 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517471 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517484 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517497 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517509 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517521 4760 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517535 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517549 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517562 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517575 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517586 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517599 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517610 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517623 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517636 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517647 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517659 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517671 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517687 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517700 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517713 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517725 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517739 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517755 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" 
seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517769 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517780 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517793 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517806 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517819 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517831 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 
18:00:57.517844 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517857 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517869 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517883 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517895 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517908 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517920 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517932 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517946 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517958 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517970 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517983 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.517996 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518009 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518023 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518036 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518048 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518093 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518107 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518119 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518134 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518146 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518158 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518171 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518184 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518196 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518208 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518222 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518234 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518248 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518261 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518275 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518287 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518299 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518311 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518325 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518338 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: 
I0123 18:00:57.518350 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518362 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518375 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518389 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518402 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518448 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518461 4760 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518473 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518487 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518499 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518512 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518524 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518537 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518550 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518562 4760 reconstruct.go:97] "Volume reconstruction finished" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.518573 4760 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.536819 4760 manager.go:324] Recovery completed Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.548567 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.549879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.549929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.549945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.551031 4760 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.551051 4760 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.551073 4760 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.561052 4760 policy_none.go:49] "None policy: Start" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 
18:00:57.562328 4760 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.562364 4760 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.591744 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.593845 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.593907 4760 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.593943 4760 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.594146 4760 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:00:57 crc kubenswrapper[4760]: W0123 18:00:57.595884 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.596357 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.603898 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.617828 4760 manager.go:334] "Starting Device Plugin manager" 
Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.617890 4760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.617905 4760 server.go:79] "Starting device plugin registration server" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.618624 4760 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.618645 4760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.618926 4760 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.619062 4760 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.619070 4760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.640247 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.694784 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.694892 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.696446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc 
kubenswrapper[4760]: I0123 18:00:57.696482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.696493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.696641 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697068 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697106 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697783 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697923 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.697964 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.698992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699134 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699275 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699306 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.699922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700013 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700371 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700399 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.700939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.701147 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.701181 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.701823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.701863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.701873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.702013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.702090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.702106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.705957 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="400ms" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.719475 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.720883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 
18:00:57.720941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.720957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.720990 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.721547 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.90:6443: connect: connection refused" node="crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722740 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722921 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.722985 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723022 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723151 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.723197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc 
kubenswrapper[4760]: I0123 18:00:57.824499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824685 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825028 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.824883 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825104 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825073 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.825100 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.922657 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.924343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.924375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:57 crc 
kubenswrapper[4760]: I0123 18:00:57.924386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:57 crc kubenswrapper[4760]: I0123 18:00:57.924429 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:00:57 crc kubenswrapper[4760]: E0123 18:00:57.924805 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.90:6443: connect: connection refused" node="crc" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.027364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.046643 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.055740 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.061742 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-879778a00528befafa9d1a544f0c4e45837a224fd1d6d60aa2edf34373f6ce22 WatchSource:0}: Error finding container 879778a00528befafa9d1a544f0c4e45837a224fd1d6d60aa2edf34373f6ce22: Status 404 returned error can't find the container with id 879778a00528befafa9d1a544f0c4e45837a224fd1d6d60aa2edf34373f6ce22 Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.067180 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-da2b32fbaa8b070118353060d6323f65efd4f978722425e156db5e83f2eef92c WatchSource:0}: Error finding container da2b32fbaa8b070118353060d6323f65efd4f978722425e156db5e83f2eef92c: Status 404 returned error can't find the container with id da2b32fbaa8b070118353060d6323f65efd4f978722425e156db5e83f2eef92c Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.070805 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.078007 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.087186 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4c61353045beff7eca2e22e50307dcebf7785e4291611e5e42a3cef66962b28f WatchSource:0}: Error finding container 4c61353045beff7eca2e22e50307dcebf7785e4291611e5e42a3cef66962b28f: Status 404 returned error can't find the container with id 4c61353045beff7eca2e22e50307dcebf7785e4291611e5e42a3cef66962b28f Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.097336 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-eed9442e59fea9335104f3c80259d5eee978050051c8aff9059f7dcdcbb08e52 WatchSource:0}: Error finding container eed9442e59fea9335104f3c80259d5eee978050051c8aff9059f7dcdcbb08e52: Status 404 returned error can't find the container with id eed9442e59fea9335104f3c80259d5eee978050051c8aff9059f7dcdcbb08e52 Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.107034 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="800ms" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.325881 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.328064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.328136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.328151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.328185 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.328890 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.90:6443: connect: connection refused" node="crc" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.501509 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.504643 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:30:38.74256365 +0000 UTC Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.603332 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338" exitCode=0 Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.603435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.603757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eed9442e59fea9335104f3c80259d5eee978050051c8aff9059f7dcdcbb08e52"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.603869 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.605112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.605142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.605152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.605988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.606018 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c61353045beff7eca2e22e50307dcebf7785e4291611e5e42a3cef66962b28f"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.607753 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367" exitCode=0 Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.607818 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.607840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e77e7b091fbabfaf3f2ad09c2284a930268c69682f13cf305e7d3f3af2441ad7"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.607904 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.608613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.608655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.608670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.610756 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.611121 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.611208 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611865 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da" exitCode=0 Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611950 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.611987 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da2b32fbaa8b070118353060d6323f65efd4f978722425e156db5e83f2eef92c"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.612287 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.613193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.613231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 
18:00:58.613244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.615046 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b" exitCode=0 Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.615092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.615122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"879778a00528befafa9d1a544f0c4e45837a224fd1d6d60aa2edf34373f6ce22"} Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.615203 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.616044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.616078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:58 crc kubenswrapper[4760]: I0123 18:00:58.616088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.732850 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
38.129.56.90:6443: connect: connection refused Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.732929 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.807741 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.807828 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.907888 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="1.6s" Jan 23 18:00:58 crc kubenswrapper[4760]: W0123 18:00:58.949199 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:58 crc kubenswrapper[4760]: E0123 18:00:58.949709 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.129186 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.160285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.160446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.160525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.160658 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:00:59 crc kubenswrapper[4760]: E0123 18:00:59.161221 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.90:6443: connect: connection refused" node="crc" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.504857 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:35:51.116126748 +0000 UTC Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.505309 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.571049 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Rotating certificates Jan 23 18:00:59 crc kubenswrapper[4760]: E0123 18:00:59.571851 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.90:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.619514 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c34fdcb71cc1645d02a4a3eb4c244062e7defffc5fe2029c76bcfd46d69bb35a"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.619648 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.620811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.620845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.620877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.622829 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.622904 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.622916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.623014 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.652909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.652942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.652955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.654461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.654499 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.654512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a"} Jan 23 
18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.654525 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.655775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.655801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.655811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.657884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.657911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.659313 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35" exitCode=0 Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.659340 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35"} Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.659462 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.662798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.662832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:00:59 crc kubenswrapper[4760]: I0123 18:00:59.662842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.360683 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.500670 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.90:6443: connect: connection refused Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.507288 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:21:08.939991144 +0000 UTC Jan 23 18:01:00 crc kubenswrapper[4760]: E0123 18:01:00.508632 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="3.2s" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.672502 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c"} Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.672570 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.672587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337"} Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.672637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0"} Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.674029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.674073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.674088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.676912 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8" exitCode=0 Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.676979 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8"} Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.677088 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.677120 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.678974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.761689 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.762695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.762716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.762725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:00 crc kubenswrapper[4760]: I0123 18:01:00.762744 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.508239 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:57:18.82444255 +0000 UTC Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.549555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.684308 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.684265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700"} Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.684457 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.684462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577"} Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.684693 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641"} Jan 23 18:01:01 crc 
kubenswrapper[4760]: I0123 18:01:01.685130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.685167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.685181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.685514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.685564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:01 crc kubenswrapper[4760]: I0123 18:01:01.685582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.509165 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:48:12.234101929 +0000 UTC Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.542084 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.693951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833"} Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.694023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451"} Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.694030 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.694102 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.694271 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:02 crc kubenswrapper[4760]: I0123 18:01:02.695925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.147329 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.510292 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:05:20.277704161 +0000 UTC Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.697676 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.698817 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.699951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.699979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.699993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.700885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.700961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.700987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.799893 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 18:01:03 crc kubenswrapper[4760]: I0123 18:01:03.957380 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.021489 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.026823 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.089474 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.089702 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.090665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.090693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.090720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.201704 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.511137 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-11-15 06:10:56.035179569 +0000 UTC Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.700515 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.700547 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.700515 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.701901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.702153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.702188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 
18:01:04.702200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:04 crc kubenswrapper[4760]: I0123 18:01:04.824693 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.512130 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:44:01.610729088 +0000 UTC Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.542493 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.542578 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.702772 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.702793 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.703825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.703851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.703862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.704121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.704154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:05 crc kubenswrapper[4760]: I0123 18:01:05.704164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.010989 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.011157 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.013053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.013106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.013122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:06 crc kubenswrapper[4760]: I0123 18:01:06.512396 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:30:40.3404251 +0000 UTC Jan 23 18:01:07 crc kubenswrapper[4760]: I0123 18:01:07.512706 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:56:19.182403067 +0000 UTC Jan 23 18:01:07 crc kubenswrapper[4760]: E0123 18:01:07.640824 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 18:01:08 crc kubenswrapper[4760]: I0123 18:01:08.513548 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:16:29.378974492 +0000 UTC Jan 23 18:01:09 crc kubenswrapper[4760]: I0123 18:01:09.514608 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:44:44.032838034 +0000 UTC Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.261842 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.261903 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.266692 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.266757 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.365363 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.365548 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.366647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.366675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.366689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:10 crc kubenswrapper[4760]: I0123 18:01:10.515652 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:25:56.626320637 +0000 UTC Jan 23 18:01:11 crc kubenswrapper[4760]: I0123 18:01:11.515778 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:31:29.613775501 +0000 UTC Jan 23 18:01:11 crc kubenswrapper[4760]: I0123 18:01:11.549983 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 18:01:11 crc kubenswrapper[4760]: I0123 18:01:11.550033 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 18:01:12 crc kubenswrapper[4760]: I0123 18:01:12.516451 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:16:12.913896132 +0000 UTC Jan 23 18:01:13 crc kubenswrapper[4760]: I0123 18:01:13.517123 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:10:14.002101471 +0000 UTC Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.094794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.094969 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.095324 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.095380 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" 
output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.096843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.096890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.096903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.101341 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.517394 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:55:29.536889517 +0000 UTC Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.726739 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.727296 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.727360 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 18:01:14 
crc kubenswrapper[4760]: I0123 18:01:14.728128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.728170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.728181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.851990 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.852147 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.853119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.853179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.853189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:14 crc kubenswrapper[4760]: I0123 18:01:14.868826 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.268805 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.269084 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.312481 4760 trace.go:236] Trace[1863029785]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:01:01.480) (total time: 13832ms): Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[1863029785]: ---"Objects listed" error: 13832ms (18:01:15.312) Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[1863029785]: [13.8320305s] [13.8320305s] END Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.312523 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.313124 4760 trace.go:236] Trace[4693213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:01:00.828) (total time: 14484ms): Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[4693213]: ---"Objects listed" error: 14484ms (18:01:15.313) Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[4693213]: [14.484272947s] [14.484272947s] END Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.313157 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.314072 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.314149 4760 trace.go:236] Trace[1136746267]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:01:00.561) (total time: 14752ms): Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[1136746267]: ---"Objects listed" error: 14752ms (18:01:15.313) Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[1136746267]: [14.752539772s] [14.752539772s] END Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.314180 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.326101 4760 trace.go:236] Trace[706702824]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:01:01.301) (total time: 14024ms): Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[706702824]: ---"Objects listed" error: 14024ms (18:01:15.326) Jan 23 18:01:15 crc kubenswrapper[4760]: Trace[706702824]: [14.024996188s] [14.024996188s] END Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.326129 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.330174 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.370639 4760 csr.go:261] certificate signing request csr-2sjrf is approved, waiting to be issued Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.371549 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.376258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.379672 4760 csr.go:257] certificate signing request csr-2sjrf is issued Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.495945 4760 apiserver.go:52] "Watching apiserver" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.497938 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.498425 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.499589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.499792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.499822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.499893 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.499814 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.499940 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.499801 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.500321 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.500347 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.502088 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.502146 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.502794 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.502794 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.503007 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.503270 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.504017 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.504343 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.505295 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.506135 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514646 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514811 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514837 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514858 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514898 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.514945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515003 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515119 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515116 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:01:15 crc 
kubenswrapper[4760]: I0123 18:01:15.515215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515238 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515287 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515310 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515336 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515377 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515397 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515546 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515568 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515586 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515604 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515620 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515636 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515655 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515702 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515741 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515795 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515811 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515828 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515844 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515904 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515922 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515936 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.515986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516003 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516036 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516077 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516156 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516175 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516313 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516305 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516330 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516349 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516365 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516380 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516447 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516464 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516479 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516496 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516897 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517022 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517039 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517055 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517212 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517230 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") 
" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517284 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517332 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517363 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517395 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517458 
4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517474 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517604 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517640 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517657 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " 
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517726 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517742 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517847 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517907 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517924 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 
23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.517941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519875 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519894 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519913 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.519989 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:01:15 crc 
kubenswrapper[4760]: I0123 18:01:15.520155 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520183 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520222 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520240 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520361 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520418 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520456 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520555 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520571 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520660 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520677 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520736 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520753 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520769 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520785 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520910 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520927 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520943 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520976 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520993 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516627 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516640 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.516641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.520064 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:35:48.117966928 +0000 UTC Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521616 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521652 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 
18:01:15.521893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521921 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.521978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522029 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc 
kubenswrapper[4760]: I0123 18:01:15.522083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522113 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522142 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522221 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522247 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522504 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.522983 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.523108 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.523309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.523729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.523787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.524088 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.524134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.524393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.524467 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.524697 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.526627 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.527573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.527581 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.527746 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528008 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.529004 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.529008 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.528352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.529540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.529553 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.529940 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.530038 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.530283 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.531797 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.531856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.531977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532120 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532138 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532260 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532782 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.532973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533102 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533754 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533939 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.533992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534113 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534242 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534321 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534340 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534530 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534377 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534512 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.534899 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.535470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.535491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536734 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536770 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536938 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.536548 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-ope
rator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537179 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.537192 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.537394 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:16.037371994 +0000 UTC m=+19.039829987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537589 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537619 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537796 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.537896 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.538021 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:16.038003833 +0000 UTC m=+19.040461766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.538018 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.538080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.537186 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.538342 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.538387 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.538986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.539679 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.539806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.539884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.539988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.540055 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.540851 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.540901 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542114 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542331 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542356 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542369 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542799 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.542990 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543065 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543323 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543321 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543732 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.543910 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.544054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.544666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.544899 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.545158 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.545216 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.545421 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.547025 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.552981 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.553040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.556047 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.556091 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.556639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.556809 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.557143 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.557624 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.557682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.557960 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562240 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562547 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562589 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.562600 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:16.062570917 +0000 UTC m=+19.065028850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.556103 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562770 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562832 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562894 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.562952 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563015 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563080 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563144 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563200 4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563257 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563309 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563366 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563444 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563513 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563597 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563655 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563713 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563765 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563823 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563885 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563945 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.563053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.564696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565071 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565096 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565120 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.565117 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565071 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565165 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565186 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565228 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:16.065191313 +0000 UTC m=+19.067649286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.565248 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:16.065240285 +0000 UTC m=+19.067698318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.565835 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.566392 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567418 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567505 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.567990 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.568101 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.568446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.568468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.568994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.569207 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.570116 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.570123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.570215 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.568718 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.570369 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.570873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.575087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.575176 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.575814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.575894 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.576004 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.577926 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.578326 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.578839 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580059 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580183 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.580596 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.581361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.581654 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.581691 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.581841 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.581908 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.582219 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.582493 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.582748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.582900 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.583807 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.588147 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.590491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.590541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.591115 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.591858 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.591872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.592296 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.595689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.595794 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.596079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.596173 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.597259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.598340 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.598609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.598832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599517 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.599981 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.601728 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.603125 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.603925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.604419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.605049 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.606105 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.607995 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.608852 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.610274 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.610527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.612753 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.615158 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.617178 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.618083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.618114 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.618843 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.619962 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.621608 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.621965 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.622606 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.623359 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.624867 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.625382 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.625623 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.625683 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.626464 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.627027 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.627692 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.628740 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.629304 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.629778 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.630770 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.631182 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.632340 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.632441 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.633096 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.633947 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.634606 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.635658 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.636116 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.636207 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.638165 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.638808 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.639213 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.640695 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.641723 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.642282 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.643290 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.644133 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.644958 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.645575 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.646592 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.647588 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.648087 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.648193 4760 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.648729 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.649672 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.650542 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.651380 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.651868 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.652666 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.653166 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.653715 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.654545 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668749 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668913 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668924 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668935 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668943 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668951 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668960 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668968 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668976 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668983 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668991 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.668999 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669008 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669015 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669023 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669031 4760 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669039 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669048 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669057 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669066 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669075 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669083 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669092 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669100 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669108 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669115 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669124 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669132 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669141 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669151 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc 
kubenswrapper[4760]: I0123 18:01:15.669159 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669167 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669175 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669183 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669191 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669199 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669207 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669215 4760 reconciler_common.go:293] "Volume 
detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669223 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669231 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669238 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669247 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669255 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669527 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669538 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669548 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669558 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669568 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669579 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669589 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669598 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669608 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669618 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669626 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669634 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669642 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669649 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669657 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669665 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 
18:01:15.669674 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669683 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669691 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669698 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669705 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669713 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669722 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669731 4760 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669740 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669748 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669757 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669765 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669774 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669781 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669789 4760 reconciler_common.go:293] "Volume detached for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669797 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669805 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669812 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669820 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669830 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669837 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669845 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669853 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669870 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669878 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669885 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669893 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669900 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669908 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669916 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669924 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669932 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669940 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669949 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669958 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath 
\"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669965 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669973 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669983 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.669992 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670000 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670007 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670016 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670024 4760 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670032 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670040 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670049 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670057 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670064 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670072 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670080 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670089 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670097 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670105 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670112 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670120 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670128 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670136 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670144 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670152 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670159 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670167 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670176 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670184 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670192 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on 
node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670199 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670216 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670223 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670231 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670239 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670247 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670254 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670262 4760 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670269 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670277 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670285 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670295 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670303 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670311 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670319 4760 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670327 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670334 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670342 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670351 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670358 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670370 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670378 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670390 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670398 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670423 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670434 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670445 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670453 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670461 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670469 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670477 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670485 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670493 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670501 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670508 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670517 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node 
\"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670524 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670532 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670540 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670548 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670556 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670563 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670571 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 
18:01:15.670579 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670588 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670596 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670603 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670611 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670804 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.670860 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: E0123 18:01:15.733809 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.741495 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.816782 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.839579 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:01:15 crc kubenswrapper[4760]: W0123 18:01:15.850046 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-58aee0b7f20fd1312dd502c958aa441239a119f1cce754eb06d2de615b1dc851 WatchSource:0}: Error finding container 58aee0b7f20fd1312dd502c958aa441239a119f1cce754eb06d2de615b1dc851: Status 404 returned error can't find the container with id 58aee0b7f20fd1312dd502c958aa441239a119f1cce754eb06d2de615b1dc851 Jan 23 18:01:15 crc kubenswrapper[4760]: I0123 18:01:15.854561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:01:15 crc kubenswrapper[4760]: W0123 18:01:15.871376 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-421ad213832c793926d0b216d325b9aabd45f5c65a6b0849bd8089e4015579ae WatchSource:0}: Error finding container 421ad213832c793926d0b216d325b9aabd45f5c65a6b0849bd8089e4015579ae: Status 404 returned error can't find the container with id 421ad213832c793926d0b216d325b9aabd45f5c65a6b0849bd8089e4015579ae Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.073879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.073939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.073961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.073984 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.074013 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074064 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074077 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:17.074059018 +0000 UTC m=+20.076516951 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074125 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:17.07411229 +0000 UTC m=+20.076570223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074129 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074147 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074170 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074182 4760 projected.go:194] Error preparing data for projected volume 
kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074209 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:17.074187662 +0000 UTC m=+20.076645645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074207 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074246 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074228 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:17.074220313 +0000 UTC m=+20.076678356 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074260 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.074319 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:17.074303555 +0000 UTC m=+20.076761498 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.380825 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 17:56:15 +0000 UTC, rotation deadline is 2026-11-08 06:34:58.369288367 +0000 UTC Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.380903 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6924h33m41.988388629s for next certificate rotation Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.521775 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:29:13.473726139 +0000 UTC Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.595119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:16 crc kubenswrapper[4760]: E0123 18:01:16.595593 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.733026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.733081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.733094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"421ad213832c793926d0b216d325b9aabd45f5c65a6b0849bd8089e4015579ae"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.734155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"58aee0b7f20fd1312dd502c958aa441239a119f1cce754eb06d2de615b1dc851"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.735371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.735428 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"042acd40c2c97f15fb20144282191ba3d9afb89e364905396703f8453d820712"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.736743 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.738236 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337" exitCode=255 Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.738313 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337"} Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.752989 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vgrsn"] Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.753266 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.754636 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.754784 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.754902 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.755538 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.758086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-h6qwf"] Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.758339 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.760310 4760 scope.go:117] "RemoveContainer" containerID="3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.761188 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.763317 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.763356 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.772131 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.773906 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.797104 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.821572 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.839445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.855949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.876907 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.880255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f722071-172e-4ab9-9cf5-67e13dfe9aea-serviceca\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.880293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f722071-172e-4ab9-9cf5-67e13dfe9aea-host\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.880347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-446tn\" (UniqueName: \"kubernetes.io/projected/5f722071-172e-4ab9-9cf5-67e13dfe9aea-kube-api-access-446tn\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.880979 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5f97\" (UniqueName: \"kubernetes.io/projected/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-kube-api-access-n5f97\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.881054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-hosts-file\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.911678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.930638 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.955884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.973042 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446tn\" (UniqueName: 
\"kubernetes.io/projected/5f722071-172e-4ab9-9cf5-67e13dfe9aea-kube-api-access-446tn\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982098 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-hosts-file\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5f97\" (UniqueName: \"kubernetes.io/projected/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-kube-api-access-n5f97\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f722071-172e-4ab9-9cf5-67e13dfe9aea-serviceca\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982161 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f722071-172e-4ab9-9cf5-67e13dfe9aea-host\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982204 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f722071-172e-4ab9-9cf5-67e13dfe9aea-host\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " 
pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.982214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-hosts-file\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.983910 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f722071-172e-4ab9-9cf5-67e13dfe9aea-serviceca\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:16 crc kubenswrapper[4760]: I0123 18:01:16.988787 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.000066 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.018473 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.031646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.043445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.057595 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.069807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.074670 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5f97\" (UniqueName: \"kubernetes.io/projected/80c0c68a-6978-4fa1-82c6-3fb16bcce76b-kube-api-access-n5f97\") pod \"node-resolver-h6qwf\" (UID: \"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\") " pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.075403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446tn\" (UniqueName: 
\"kubernetes.io/projected/5f722071-172e-4ab9-9cf5-67e13dfe9aea-kube-api-access-446tn\") pod \"node-ca-vgrsn\" (UID: \"5f722071-172e-4ab9-9cf5-67e13dfe9aea\") " pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.081624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.082686 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-h6qwf" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.082829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.082906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.082939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.082967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083032 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:19.083002631 +0000 UTC m=+22.085460574 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083057 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083057 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083091 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:17 crc 
kubenswrapper[4760]: E0123 18:01:17.083077 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083163 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083206 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:19.083186307 +0000 UTC m=+22.085644290 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.083241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083262 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083276 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:19.083258919 +0000 UTC m=+22.085716852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083276 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083293 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:19.083285549 +0000 UTC m=+22.085743482 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083294 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.083325 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:19.08331608 +0000 UTC m=+22.085774013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.095010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.339925 4760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340182 4760 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340188 4760 reflector.go:484] 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340240 4760 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340191 4760 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340272 4760 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.340364 4760 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.343345 4760 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.366742 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vgrsn" Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.377016 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f722071_172e_4ab9_9cf5_67e13dfe9aea.slice/crio-1c212a4023116956effc7e1c841a9634461d13e4dcacbf82f3e991b03a0365af WatchSource:0}: Error finding container 1c212a4023116956effc7e1c841a9634461d13e4dcacbf82f3e991b03a0365af: Status 404 returned error can't find the container with id 1c212a4023116956effc7e1c841a9634461d13e4dcacbf82f3e991b03a0365af Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.522173 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:15:37.109581856 +0000 UTC Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.533281 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7ck54"] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.533611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.535754 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-58zkr"] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.536631 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.536748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.537130 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.537136 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.537164 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.538575 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.538729 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-66s9m"] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.538871 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.539369 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.540186 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.540884 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541021 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541226 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6xsk7"] Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541439 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541489 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541643 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541664 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.541748 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.546683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.546744 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.546762 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.546780 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.546881 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.554102 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.556026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.567646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.590655 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.597802 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.597848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.598115 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:17 crc kubenswrapper[4760]: E0123 18:01:17.598155 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.602659 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.613926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.627459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.648871 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.661537 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.678581 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.688899 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-cni-binary-copy\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 
18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-socket-dir-parent\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689256 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689365 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cnibin\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-system-cni-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " 
pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20652c61-310f-464d-ae66-dfc025a16b8d-proxy-tls\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.689933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-os-release\") pod 
\"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690241 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-multus-daemon-config\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690334 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-k8s-cni-cncf-io\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690606 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690672 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-hostroot\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-etc-kubernetes\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690770 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690822 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjgl\" (UniqueName: \"kubernetes.io/projected/dd0f369c-16f4-4156-9b96-cef4c4fad7db-kube-api-access-8hjgl\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hj9\" (UniqueName: \"kubernetes.io/projected/20652c61-310f-464d-ae66-dfc025a16b8d-kube-api-access-p2hj9\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc 
kubenswrapper[4760]: I0123 18:01:17.690872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-multus\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-conf-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/20652c61-310f-464d-ae66-dfc025a16b8d-rootfs\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.690965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-system-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc 
kubenswrapper[4760]: I0123 18:01:17.690986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-os-release\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691010 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-bin\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: 
I0123 18:01:17.691105 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4b4j\" (UniqueName: \"kubernetes.io/projected/ac96490a-85b1-48f4-99d1-2b7505744007-kube-api-access-c4b4j\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691194 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20652c61-310f-464d-ae66-dfc025a16b8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-netns\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " 
pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691256 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691284 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691304 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691339 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-multus-certs\") pod \"multus-7ck54\" (UID: 
\"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528cp\" (UniqueName: \"kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-cnibin\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: 
\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-kubelet\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.691557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.699581 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.712290 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.724092 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.735437 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.742434 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h6qwf" event={"ID":"80c0c68a-6978-4fa1-82c6-3fb16bcce76b","Type":"ContainerStarted","Data":"3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f"} Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.742479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h6qwf" event={"ID":"80c0c68a-6978-4fa1-82c6-3fb16bcce76b","Type":"ContainerStarted","Data":"5ad8874b642b57f7d28539d02e709a111f61ba5a91e48676b80af09529e0b31d"} Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.743808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vgrsn" 
event={"ID":"5f722071-172e-4ab9-9cf5-67e13dfe9aea","Type":"ContainerStarted","Data":"4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040"} Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.743850 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vgrsn" event={"ID":"5f722071-172e-4ab9-9cf5-67e13dfe9aea","Type":"ContainerStarted","Data":"1c212a4023116956effc7e1c841a9634461d13e4dcacbf82f3e991b03a0365af"} Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.745664 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.747475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff"} Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.748929 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.759391 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.771032 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.785190 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides\") pod \"ovnkube-node-58zkr\" (UID: 
\"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792574 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4b4j\" (UniqueName: \"kubernetes.io/projected/ac96490a-85b1-48f4-99d1-2b7505744007-kube-api-access-c4b4j\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20652c61-310f-464d-ae66-dfc025a16b8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792592 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-netns\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-netns\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792765 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792811 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792829 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792831 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-multus-certs\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792887 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-multus-certs\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528cp\" (UniqueName: \"kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.792993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-cnibin\") pod \"multus-7ck54\" (UID: 
\"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-kubelet\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-cnibin\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc 
kubenswrapper[4760]: I0123 18:01:17.793143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-kubelet\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cnibin\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793190 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-cni-binary-copy\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cnibin\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-socket-dir-parent\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-socket-dir-parent\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793400 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793419 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-system-cni-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20652c61-310f-464d-ae66-dfc025a16b8d-proxy-tls\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793510 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-multus-daemon-config\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793528 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-os-release\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793527 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-system-cni-dir\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-k8s-cni-cncf-io\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793703 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-hostroot\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-etc-kubernetes\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793822 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjgl\" (UniqueName: \"kubernetes.io/projected/dd0f369c-16f4-4156-9b96-cef4c4fad7db-kube-api-access-8hjgl\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793851 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-run-k8s-cni-cncf-io\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hj9\" (UniqueName: \"kubernetes.io/projected/20652c61-310f-464d-ae66-dfc025a16b8d-kube-api-access-p2hj9\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-etc-kubernetes\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-multus\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793912 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-multus\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-conf-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-system-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/20652c61-310f-464d-ae66-dfc025a16b8d-rootfs\") pod \"machine-config-daemon-6xsk7\" (UID: 
\"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793969 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-os-release\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-bin\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.794012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.794555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.795341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd0f369c-16f4-4156-9b96-cef4c4fad7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.795848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-multus-daemon-config\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.795956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.795993 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-multus-conf-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.796294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 
18:01:17.796760 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd0f369c-16f4-4156-9b96-cef4c4fad7db-os-release\") pod \"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.796853 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-hostroot\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.796937 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797012 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-system-cni-dir\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/20652c61-310f-464d-ae66-dfc025a16b8d-rootfs\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.793863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/ac96490a-85b1-48f4-99d1-2b7505744007-cni-binary-copy\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-os-release\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797163 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ac96490a-85b1-48f4-99d1-2b7505744007-host-var-lib-cni-bin\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.797718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20652c61-310f-464d-ae66-dfc025a16b8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.806791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert\") pod \"ovnkube-node-58zkr\" (UID: 
\"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.807069 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.808106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20652c61-310f-464d-ae66-dfc025a16b8d-proxy-tls\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.812035 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4b4j\" (UniqueName: \"kubernetes.io/projected/ac96490a-85b1-48f4-99d1-2b7505744007-kube-api-access-c4b4j\") pod \"multus-7ck54\" (UID: \"ac96490a-85b1-48f4-99d1-2b7505744007\") " pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.817237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjgl\" (UniqueName: \"kubernetes.io/projected/dd0f369c-16f4-4156-9b96-cef4c4fad7db-kube-api-access-8hjgl\") pod 
\"multus-additional-cni-plugins-66s9m\" (UID: \"dd0f369c-16f4-4156-9b96-cef4c4fad7db\") " pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.818261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hj9\" (UniqueName: \"kubernetes.io/projected/20652c61-310f-464d-ae66-dfc025a16b8d-kube-api-access-p2hj9\") pod \"machine-config-daemon-6xsk7\" (UID: \"20652c61-310f-464d-ae66-dfc025a16b8d\") " pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.819123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.819607 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528cp\" (UniqueName: \"kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp\") pod \"ovnkube-node-58zkr\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.831568 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.847156 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7ck54" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.849634 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T
18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources
-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.863216 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.869874 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.870257 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-66s9m" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.876449 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.883187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.897281 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.903284 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03a394da_f311_4268_9011_d781ba14cb3f.slice/crio-c875a73d78524a85d0e23411dd91c84150f9863f072e7c72f3588a961aeef846 WatchSource:0}: Error finding container c875a73d78524a85d0e23411dd91c84150f9863f072e7c72f3588a961aeef846: Status 404 returned error can't find the container with id c875a73d78524a85d0e23411dd91c84150f9863f072e7c72f3588a961aeef846 Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.910427 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: W0123 18:01:17.926002 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0f369c_16f4_4156_9b96_cef4c4fad7db.slice/crio-52acdb125d3ccfcab75fbd7459e433afef264b1fef966b45fe2a16d29fbe8b0d WatchSource:0}: Error finding container 52acdb125d3ccfcab75fbd7459e433afef264b1fef966b45fe2a16d29fbe8b0d: Status 404 returned error can't find the container with id 52acdb125d3ccfcab75fbd7459e433afef264b1fef966b45fe2a16d29fbe8b0d Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.937112 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.956047 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.978883 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:17 crc kubenswrapper[4760]: I0123 18:01:17.994909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.011812 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.031456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.052465 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.064808 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.081147 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.103256 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.143393 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.161707 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.203049 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.215359 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.228662 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.245619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.262927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d9
5622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.276583 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.289939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.301210 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.312497 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.333157 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.333233 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.346187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.360728 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.373069 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.385582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.397427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.398331 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.407706 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.419552 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.441590 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.469773 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.471337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.471469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.471572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.471740 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.484729 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.515093 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.515328 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.516253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.516357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.516437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.516516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.516598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.523257 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:59:13.12460481 +0000 UTC Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.533168 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",
\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.534627 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.537167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.537221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.537234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.537251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.537265 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.548786 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.551915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.552046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.552104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.552162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.552214 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.565238 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.571476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.571513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.571525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.571542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.571553 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.587949 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.591093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.591125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.591136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.591149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.591159 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.594271 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.594376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.602881 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: E0123 18:01:18.603037 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.604324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.604370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.604383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.604399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.604425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.640065 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.706246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.706292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.706302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.706317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.706329 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.727321 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.752033 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" exitCode=0 Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.752104 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.752142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"c875a73d78524a85d0e23411dd91c84150f9863f072e7c72f3588a961aeef846"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.754058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.754152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.754182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"a442edadc7036b2a13789b1eec7f3df8dc2d16be56eac5a48b77566c5ce9efa6"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.757190 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365" exitCode=0 Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.757280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.757312 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerStarted","Data":"52acdb125d3ccfcab75fbd7459e433afef264b1fef966b45fe2a16d29fbe8b0d"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.759250 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.761110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerStarted","Data":"02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.761160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" 
event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerStarted","Data":"296d822546b7701f8e05619db39224635869404efaf1e66304f6eb753a6ae794"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.761813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.770865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.788261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.798504 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.819517 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d9
5622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.819922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.819973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.819984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.820003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.820091 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.834054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18
fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.849744 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.889419 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.922933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.922975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.922985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.923000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.923009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:18Z","lastTransitionTime":"2026-01-23T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.925780 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.935586 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 18:01:18 crc kubenswrapper[4760]: I0123 18:01:18.988803 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.024615 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.025426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.025546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.025645 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.025713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.025768 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.062773 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.106312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.107596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.107711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.107781 4760 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:23.107742228 +0000 UTC m=+26.110200211 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.107832 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.107876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.107900 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:23.107883772 +0000 UTC m=+26.110341705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.107999 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.108057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108085 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108122 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108136 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108197 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108267 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108327 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108213 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:23.10819335 +0000 UTC m=+26.110651293 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108355 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108369 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:23.108354645 +0000 UTC m=+26.110812618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.108500 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:23.108474709 +0000 UTC m=+26.110932682 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.129793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.129838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.129853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.129892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.129906 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.157317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.186529 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.227897 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.232601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.232643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.232654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.232669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.232681 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.263869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.301566 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.335276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.335313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.335321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.335337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.335346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.343620 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.395665 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.425117 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.437889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.437959 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.437984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.438012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.438034 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.462622 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.503991 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.524355 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:53:16.402136065 +0000 UTC Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.540362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.540401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.540432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.540449 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.540461 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.544626 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.586632 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.594287 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.594347 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.594433 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:19 crc kubenswrapper[4760]: E0123 18:01:19.594644 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.622515 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.642301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.642343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.642353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.642368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.642378 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.666099 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.701240 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.742652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.743929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.743961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.743974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.743992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.744004 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.765964 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d" exitCode=0 Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.766035 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769860 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.769873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.784866 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.821720 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.847135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.847201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.847217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.847241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.847256 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.867105 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.903974 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.950024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.950068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.950104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.950120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.950131 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:19Z","lastTransitionTime":"2026-01-23T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.955511 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:19 crc kubenswrapper[4760]: I0123 18:01:19.983654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.023733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.052562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.052596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.052608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.052625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.052636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.064620 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.102618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.144780 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.155367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.155398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.155431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.155445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.155455 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.186456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.232009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.257544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.257865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.257879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.257895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.257906 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.261438 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.301108 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8
ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.351945 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.360317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.360364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.360375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.360392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.360417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.387252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62
f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.423792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.462158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.462189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.462198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.462221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.462231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.525497 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:44:14.782150669 +0000 UTC Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.564852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.564885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.564892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.564907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.564916 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.594210 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:20 crc kubenswrapper[4760]: E0123 18:01:20.594328 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.667792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.667839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.667848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.667863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.667876 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.770535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.770562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.770572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.770588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.770599 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.775183 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666" exitCode=0 Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.775229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.791330 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.817426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.833545 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.848535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.871938 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.872989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.873023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.873032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.873047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.873058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.885947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.896188 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 
18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.904331 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.915896 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.929916 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e0
48f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.942455 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.951897 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.962125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.976182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 
18:01:20.976223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.976230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.976243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.976252 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:20Z","lastTransitionTime":"2026-01-23T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:20 crc kubenswrapper[4760]: I0123 18:01:20.983109 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.020585 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.079043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.079115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.079138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.079165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.079186 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.181596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.181645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.181656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.181674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.181683 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.284036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.284074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.284084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.284098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.284108 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.386755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.386801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.386812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.386831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.386842 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.488999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.489036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.489047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.489062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.489074 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.526514 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:58:54.580045649 +0000 UTC Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.591324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.591370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.591382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.591398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.591423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.594822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.594874 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:21 crc kubenswrapper[4760]: E0123 18:01:21.594958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:21 crc kubenswrapper[4760]: E0123 18:01:21.595108 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.693465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.693504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.693515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.693531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.693542 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.780316 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerStarted","Data":"b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.796019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.796050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.796058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.796071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.796080 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.899050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.899097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.899108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.899128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:21 crc kubenswrapper[4760]: I0123 18:01:21.899140 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:21Z","lastTransitionTime":"2026-01-23T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.001726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.001776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.001792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.001814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.001830 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.104213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.104250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.104263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.104282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.104293 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.206608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.206644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.206654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.206669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.206679 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.309118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.309182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.309192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.309206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.309217 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.411924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.411960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.411968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.411984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.411994 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.514304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.514353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.514374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.514392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.514440 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.527003 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:02:15.70056105 +0000 UTC Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.594955 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:22 crc kubenswrapper[4760]: E0123 18:01:22.595087 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.616933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.616964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.616972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.616986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.616995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.719387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.719465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.719478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.719627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.719671 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.787011 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5" exitCode=0 Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.787075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.791776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.800286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.820009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.822096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.822136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.822149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.822165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.822179 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.834442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.847315 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 
18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.855957 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.866196 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.880865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e0
48f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.893530 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.904424 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.913703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.925757 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.926076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.926111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.926123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.926139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.926150 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:22Z","lastTransitionTime":"2026-01-23T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.937199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.949175 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.967471 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:22 crc kubenswrapper[4760]: I0123 18:01:22.980235 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.028857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.028903 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.028916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.028934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.028949 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.131644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.131677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.131685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.131699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.131708 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.142159 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.142291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.142331 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.142357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142401 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:31.142362617 +0000 UTC m=+34.144820570 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142454 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142496 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.142495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142507 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 18:01:31.142492881 +0000 UTC m=+34.144950814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142516 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142582 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142601 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142630 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:31.142619315 +0000 UTC m=+34.145077288 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142659 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:31.142646515 +0000 UTC m=+34.145104528 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142832 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142844 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142855 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.142897 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:31.142887262 +0000 UTC m=+34.145345195 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.234623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.234671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.234684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.234698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.234707 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.337218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.337276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.337287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.337303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.337318 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.439744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.439778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.439789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.439801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.439811 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.527996 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:41:17.916290827 +0000 UTC Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.542162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.542203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.542214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.542232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.542243 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.595210 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.595276 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.595349 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:23 crc kubenswrapper[4760]: E0123 18:01:23.595452 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.644081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.644121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.644129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.644142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.644150 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.746014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.746056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.746068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.746086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.746097 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.796992 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001" exitCode=0 Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.797037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.813516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.831724 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.848503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.848550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.848562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.848579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.848593 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.857629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.871588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e0
48f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.885563 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.900501 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.913156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.923982 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.940922 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.951691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.951731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.951741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.951755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.951768 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:23Z","lastTransitionTime":"2026-01-23T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.956677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.969073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:23 crc kubenswrapper[4760]: I0123 18:01:23.985331 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.006719 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.020947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.039103 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.053948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.053994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.054006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc 
kubenswrapper[4760]: I0123 18:01:24.054025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.054040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.157326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.157377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.157389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.157430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.157447 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.260037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.260079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.260090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.260105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.260115 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.362012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.362048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.362059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.362075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.362086 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.464561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.464601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.464611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.464626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.464636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.528779 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:23:38.497743194 +0000 UTC Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.566671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.566708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.566718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.566731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.566741 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.594149 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:24 crc kubenswrapper[4760]: E0123 18:01:24.594262 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.669230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.669260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.669269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.669281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.669289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.771506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.771540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.771551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.771565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.771575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.803779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerStarted","Data":"d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.808753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.809239 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.809362 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.823936 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.829701 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.838874 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.854948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.869914 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.873395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.873444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.873453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.873468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.873481 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.882012 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.893494 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc 
kubenswrapper[4760]: I0123 18:01:24.903300 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4
46tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.913661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.924826 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e0
48f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.935354 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.942869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.952937 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.973165 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.975849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.975896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.975910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.975927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.975939 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:24Z","lastTransitionTime":"2026-01-23T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.986646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:24 crc kubenswrapper[4760]: I0123 18:01:24.998716 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.014701 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.028940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.047346 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.060593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.071220 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.078754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.078790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.078798 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.078812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.078821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.081652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.093312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.106317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.121427 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.134148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.143871 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.154846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.173917 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.181159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.181198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.181207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.181222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.181231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.188135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.201844 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.283778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.283987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.284049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.284109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.284164 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.386164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.386243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.386256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.386273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.386285 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.488021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.488060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.488069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.488080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.488088 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.529470 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:50:17.406960817 +0000 UTC Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.590543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.590583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.590591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.590606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.590615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.594810 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.594866 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:25 crc kubenswrapper[4760]: E0123 18:01:25.594974 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:25 crc kubenswrapper[4760]: E0123 18:01:25.595014 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.693588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.693636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.693647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.693663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.693673 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.796159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.796205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.796216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.796233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.796244 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.815233 4760 generic.go:334] "Generic (PLEG): container finished" podID="dd0f369c-16f4-4156-9b96-cef4c4fad7db" containerID="d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8" exitCode=0 Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.815356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerDied","Data":"d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.816042 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.835434 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.839296 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.849389 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.861191 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.874120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.889314 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.898429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.898485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.898497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.898512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.898528 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.905033 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.915349 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.927837 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.938524 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.947954 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.963094 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.973439 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.983886 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:25 crc kubenswrapper[4760]: I0123 18:01:25.993199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.001652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.001687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.001700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.001715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.001726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.005371 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.023441 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.034259 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.045292 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.057845 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.069244 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.086902 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587
d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.098030 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.103187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.103226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.103235 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.103249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.103259 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.110128 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.121759 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.133802 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.146948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.162196 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.174573 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.184002 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.194632 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.205145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.205223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.205236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.205254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.205267 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.307706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.307751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.307763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.307783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.307795 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.409992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.410040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.410051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.410068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.410079 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.512509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.512548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.512557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.512572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.512582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.529903 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:16:27.324749897 +0000 UTC Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.594577 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:26 crc kubenswrapper[4760]: E0123 18:01:26.594687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.614666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.614700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.614709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.614723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.614732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.717260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.717297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.717308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.717323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.717334 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.819875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.819925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.819945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.820062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.820083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.826514 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" event={"ID":"dd0f369c-16f4-4156-9b96-cef4c4fad7db","Type":"ContainerStarted","Data":"e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.841780 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.863615 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.877966 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.889229 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 
18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.898946 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.909746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.922100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.922146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.922158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.922174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.922185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:26Z","lastTransitionTime":"2026-01-23T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.923311 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.935546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.948256 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.959051 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.973873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:26 crc kubenswrapper[4760]: I0123 18:01:26.987341 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.000985 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.017644 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.025370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.025766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.025778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.025796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.025809 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.030949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62
f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.128690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.128741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.128753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.128771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.128782 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.231699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.231746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.231759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.231775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.231788 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.333742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.333810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.333822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.333839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.333851 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.435948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.436006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.436023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.436048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.436065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.531093 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:38:08.505625697 +0000 UTC Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.538470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.538532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.538551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.538575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.538592 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.594400 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.594488 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:27 crc kubenswrapper[4760]: E0123 18:01:27.594568 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:27 crc kubenswrapper[4760]: E0123 18:01:27.594625 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.608894 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.628351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.640101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.640137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.640148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.640175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.640234 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.656969 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.672500 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.687291 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.704147 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.717251 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.733923 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.742300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.742520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.742625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.742710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.742780 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.762815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.795453 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.809821 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.823512 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.830136 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/0.log" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.833109 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c" exitCode=1 Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.833165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.833782 4760 scope.go:117] "RemoveContainer" containerID="4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.845907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.845943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.845954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.845972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.845983 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.849166 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.865782 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.880305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.899176 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.914608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.933860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d428
79ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.945891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.948649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.948689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.948701 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.948717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.948728 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:27Z","lastTransitionTime":"2026-01-23T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.959057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.973566 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:27 crc kubenswrapper[4760]: I0123 18:01:27.985516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.003678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.019691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b4
7eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181
c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23
T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.032486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.048761 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.050337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.050380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.050392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.050445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.050461 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.061859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.083117 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.098185 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.114014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.152848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.152884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.152896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.152912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.152924 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.256884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.256922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.256932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.256946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.256955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.359172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.359216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.359227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.359242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.359254 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.461551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.461586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.461594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.461606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.461615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.531771 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:32:40.799798629 +0000 UTC Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.564999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.565056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.565072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.565096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.565121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.594479 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.594619 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.662651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.662702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.662718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.662737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.662751 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.676016 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.681330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.681363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.681371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.681385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.681395 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.691274 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.694247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.694281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.694290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.694303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.694312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.705012 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.708205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.708232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.708242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.708256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.708266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.722832 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.726028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.726055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.726064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.726079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.726089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.735628 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: E0123 18:01:28.735740 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.736919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.736952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.736961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.736973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.736984 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.838359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.838390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.838398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.838429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.838442 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.839362 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/0.log" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.843129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.843588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.875985 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bc
f04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.895239 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.913610 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.935663 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.940880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.940952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.940971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.940994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.941010 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:28Z","lastTransitionTime":"2026-01-23T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.960354 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:28 crc kubenswrapper[4760]: I0123 18:01:28.994061 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.010156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.027103 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.041397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.043402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.043456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.043470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.043486 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.043497 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.055961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.067691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.089651 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9
fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:
01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.103011 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.112392 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.123855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.145734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.145798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.145815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.145837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.145854 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.248271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.248348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.248369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.248398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.248451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.351159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.351221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.351240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.351266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.351285 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.455543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.455613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.455630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.455651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.455672 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.532600 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:48:44.506142817 +0000 UTC Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.559313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.559387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.559477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.559513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.559540 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.595081 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.595215 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:29 crc kubenswrapper[4760]: E0123 18:01:29.595332 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:29 crc kubenswrapper[4760]: E0123 18:01:29.595882 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.662744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.662785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.662797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.662814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.662827 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.710700 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v"] Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.711444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.714472 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.715242 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.745330 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.763337 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.765703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.765759 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.765778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.765804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.765824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.780672 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.803477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.803306 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.803633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.803705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g76sp\" (UniqueName: \"kubernetes.io/projected/7f1f91ba-af52-43da-9fe4-d146e9ccb228-kube-api-access-g76sp\") pod \"ovnkube-control-plane-749d76644c-f296v\" 
(UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.803815 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.821848 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.860213 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] 
Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.868519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.868573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.868588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.868609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.868627 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.882276 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.898036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.904881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.904940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g76sp\" (UniqueName: \"kubernetes.io/projected/7f1f91ba-af52-43da-9fe4-d146e9ccb228-kube-api-access-g76sp\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.904985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.905006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.905678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.905680 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f1f91ba-af52-43da-9fe4-d146e9ccb228-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.910571 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f1f91ba-af52-43da-9fe4-d146e9ccb228-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.918452 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.922486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g76sp\" (UniqueName: \"kubernetes.io/projected/7f1f91ba-af52-43da-9fe4-d146e9ccb228-kube-api-access-g76sp\") pod \"ovnkube-control-plane-749d76644c-f296v\" (UID: \"7f1f91ba-af52-43da-9fe4-d146e9ccb228\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.930469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.946132 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.961445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b4
7eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181
c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23
T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.971680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.971728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.971746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.971770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.971789 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:29Z","lastTransitionTime":"2026-01-23T18:01:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.977549 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:29 crc kubenswrapper[4760]: I0123 18:01:29.991715 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:29Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.008014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.024289 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.028575 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" Jan 23 18:01:30 crc kubenswrapper[4760]: W0123 18:01:30.040571 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f1f91ba_af52_43da_9fe4_d146e9ccb228.slice/crio-47068e282d8b1a800813c4edc35e752acb3a2b1b43329b7595568e3b8bb63914 WatchSource:0}: Error finding container 47068e282d8b1a800813c4edc35e752acb3a2b1b43329b7595568e3b8bb63914: Status 404 returned error can't find the container with id 47068e282d8b1a800813c4edc35e752acb3a2b1b43329b7595568e3b8bb63914 Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.074598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.074652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.074666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.074688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.074703 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.177050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.177087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.177098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.177112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.177123 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.279213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.279251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.279260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.279276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.279285 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.382050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.382099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.382110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.382125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.382138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.485653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.485715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.485732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.485753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.485768 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.533126 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:53:24.420091257 +0000 UTC Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.588342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.588374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.588383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.588397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.588428 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.594815 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:30 crc kubenswrapper[4760]: E0123 18:01:30.594952 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.690876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.691156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.691165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.691178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.691188 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.788127 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sw8p8"] Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.788591 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:30 crc kubenswrapper[4760]: E0123 18:01:30.788650 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.792833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.792863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.792873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.792887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.792900 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.803019 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.812792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.824231 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.833799 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.849912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/1.log" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.850636 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/0.log" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.853461 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed" exitCode=1 Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.853495 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.853525 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.853737 4760 scope.go:117] "RemoveContainer" containerID="4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.854361 4760 scope.go:117] "RemoveContainer" containerID="af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed" Jan 23 18:01:30 crc kubenswrapper[4760]: E0123 18:01:30.854753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.855063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" 
event={"ID":"7f1f91ba-af52-43da-9fe4-d146e9ccb228","Type":"ContainerStarted","Data":"24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.855103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" event={"ID":"7f1f91ba-af52-43da-9fe4-d146e9ccb228","Type":"ContainerStarted","Data":"47068e282d8b1a800813c4edc35e752acb3a2b1b43329b7595568e3b8bb63914"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.866423 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.878652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.894809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.894849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.894859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.894874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.894887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.898643 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.910343 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.914261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.914316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw95f\" (UniqueName: \"kubernetes.io/projected/009bf3d0-1239-4b72-8a29-8b5e5964bdac-kube-api-access-qw95f\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.928379 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.937905 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc 
kubenswrapper[4760]: I0123 18:01:30.947470 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.956189 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.975529 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.989576 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.997327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.997445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.997514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.997589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:30 crc kubenswrapper[4760]: I0123 18:01:30.997658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:30Z","lastTransitionTime":"2026-01-23T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.005540 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.015318 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw95f\" (UniqueName: \"kubernetes.io/projected/009bf3d0-1239-4b72-8a29-8b5e5964bdac-kube-api-access-qw95f\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.015555 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.015869 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.016762 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:01:31.516327448 +0000 UTC m=+34.518785491 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.019479 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.033598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw95f\" (UniqueName: \"kubernetes.io/projected/009bf3d0-1239-4b72-8a29-8b5e5964bdac-kube-api-access-qw95f\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.039225 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.052100 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.068469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 
handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for 
network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.078903 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100176 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.100623 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.112080 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.121789 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.134446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.147305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.160087 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.171233 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.179859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.189803 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.200491 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.202840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.202899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.202908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.202921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.202930 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.217717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.217811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.217833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217892 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:01:47.217865628 +0000 UTC m=+50.220323561 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217908 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217919 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217934 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217944 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.217969 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:47.21795434 +0000 UTC m=+50.220412273 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.217992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.218032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218071 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:47.218061934 +0000 UTC m=+50.220519867 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218082 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218103 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218114 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218123 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:47.218115555 +0000 UTC m=+50.220573488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218123 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.218155 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:01:47.218146937 +0000 UTC m=+50.220605000 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.221807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e
524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.234353 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.244272 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.305123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.305159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.305168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc 
kubenswrapper[4760]: I0123 18:01:31.305183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.305193 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.407853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.407893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.407927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.407943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.407952 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.510701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.510769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.510792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.510822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.510846 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.520228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.520487 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.520607 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:01:32.520579909 +0000 UTC m=+35.523037932 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.533463 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:48:42.77808116 +0000 UTC Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.553781 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.571808 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d8462
6a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.587219 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.594360 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.594383 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.594510 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:31 crc kubenswrapper[4760]: E0123 18:01:31.594587 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.600899 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.612774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.612837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.612849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc 
kubenswrapper[4760]: I0123 18:01:31.612898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.612913 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.614964 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc 
kubenswrapper[4760]: I0123 18:01:31.630597 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.644961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.667513 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.683926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.705690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9
fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:
01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.715795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.715846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.715856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.715875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.715890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.722186 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.737523 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.749648 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.763386 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.779451 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.791390 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.802888 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.813026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.818703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.818740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.818753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.818768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.818779 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.859377 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/1.log" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.863750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" event={"ID":"7f1f91ba-af52-43da-9fe4-d146e9ccb228","Type":"ContainerStarted","Data":"ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.876885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use 
of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.889939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.909117 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.918845 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.921454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.921513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.921529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.921661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.921710 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:31Z","lastTransitionTime":"2026-01-23T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.930188 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.942426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.953762 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.965270 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.981568 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b4
7eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181
c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23
T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:31 crc kubenswrapper[4760]: I0123 18:01:31.993991 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.004598 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.019508 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.024056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.024104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.024113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.024127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.024136 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.033038 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.044482 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.055860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.066590 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.086564 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:32Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.126356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.126420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.126438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.126455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.126464 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.228882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.228930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.228942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.228960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.228973 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.331836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.331915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.331940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.331968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.331989 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.434533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.434588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.434603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.434621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.434634 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.529704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:32 crc kubenswrapper[4760]: E0123 18:01:32.529855 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:32 crc kubenswrapper[4760]: E0123 18:01:32.529915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:01:34.529897084 +0000 UTC m=+37.532355017 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.534542 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:55:31.675712764 +0000 UTC Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.537167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.537220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.537235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.537253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.537265 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.595038 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.595126 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:32 crc kubenswrapper[4760]: E0123 18:01:32.595164 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:32 crc kubenswrapper[4760]: E0123 18:01:32.595203 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.640772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.640836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.640849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.640865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.640876 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.743626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.743697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.743711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.743730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.743741 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.846434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.846689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.846755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.846813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.846874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.949533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.949575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.949589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.949607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:32 crc kubenswrapper[4760]: I0123 18:01:32.949622 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:32Z","lastTransitionTime":"2026-01-23T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.054630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.054688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.054704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.054729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.054761 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.158656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.158712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.158722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.158737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.158746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.261127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.261198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.261221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.261250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.261269 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.363777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.363823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.363834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.363852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.363864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.466928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.466966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.466976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.466990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.467000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.535282 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:18:13.988141551 +0000 UTC Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.569277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.569313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.569325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.569341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.569352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.594909 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.594910 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:33 crc kubenswrapper[4760]: E0123 18:01:33.595028 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:33 crc kubenswrapper[4760]: E0123 18:01:33.595133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.672078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.672158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.672177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.672202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.672220 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.775026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.775076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.775087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.775120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.775134 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.877940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.877992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.878004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.878021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.878034 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.980721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.980781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.980792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.980814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:33 crc kubenswrapper[4760]: I0123 18:01:33.980827 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:33Z","lastTransitionTime":"2026-01-23T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.083403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.083505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.083521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.083538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.083549 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.185974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.186011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.186019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.186032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.186040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.288872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.288924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.288935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.288959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.288971 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.391043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.391111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.391128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.391146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.391159 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.493669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.493704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.493713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.493726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.493736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.536362 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:20:41.276669886 +0000 UTC Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.553220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:34 crc kubenswrapper[4760]: E0123 18:01:34.553390 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:34 crc kubenswrapper[4760]: E0123 18:01:34.553487 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:01:38.553468775 +0000 UTC m=+41.555926708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.594522 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:34 crc kubenswrapper[4760]: E0123 18:01:34.594671 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.594537 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:34 crc kubenswrapper[4760]: E0123 18:01:34.594910 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.597271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.597332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.597348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.597374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.597390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.700784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.700844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.700858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.700881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.700896 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.803895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.803949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.803976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.803994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.804005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.906234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.906289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.906302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.906320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:34 crc kubenswrapper[4760]: I0123 18:01:34.906344 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:34Z","lastTransitionTime":"2026-01-23T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.008748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.008864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.008893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.008922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.008944 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.112141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.112183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.112192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.112208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.112220 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.215391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.215467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.215479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.215497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.215512 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.318428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.318469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.318477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.318491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.318501 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.421277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.421320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.421328 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.421343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.421353 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.524671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.524730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.524745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.524762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.524775 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.537154 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:56:11.466555269 +0000 UTC Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.595279 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.595379 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:35 crc kubenswrapper[4760]: E0123 18:01:35.595493 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:35 crc kubenswrapper[4760]: E0123 18:01:35.595641 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.627390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.627443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.627456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.627471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.627480 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.730989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.731039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.731051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.731070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.731082 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.835268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.835330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.835339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.835364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.835378 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.937933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.938191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.938339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.938467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:35 crc kubenswrapper[4760]: I0123 18:01:35.938556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:35Z","lastTransitionTime":"2026-01-23T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.042120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.042166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.042177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.042197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.042212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.145074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.145110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.145118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.145134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.145144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.247893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.247945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.247959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.247976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.247986 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.351024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.351064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.351072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.351084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.351093 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.452608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.452654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.452662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.452677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.452686 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.537998 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:19:42.658630118 +0000 UTC Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.555184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.555222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.555234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.555249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.555263 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.595097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.595269 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:36 crc kubenswrapper[4760]: E0123 18:01:36.595386 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:36 crc kubenswrapper[4760]: E0123 18:01:36.595597 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.657999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.658035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.658044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.658061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.658070 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.760469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.760533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.760547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.760563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.760575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.863225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.863261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.863277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.863293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.863303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.965543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.965602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.965619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.965642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:36 crc kubenswrapper[4760]: I0123 18:01:36.965660 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:36Z","lastTransitionTime":"2026-01-23T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.068728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.068801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.068835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.068863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.068888 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.171175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.171220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.171230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.171245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.171256 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.274135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.274201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.274213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.274247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.274266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.377137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.377211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.377226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.377252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.377268 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.480952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.481018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.481030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.481050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.481062 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.538133 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:26:14.658730939 +0000 UTC Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.583459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.583495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.583507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.583524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.583537 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.595243 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.595354 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:37 crc kubenswrapper[4760]: E0123 18:01:37.595534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:37 crc kubenswrapper[4760]: E0123 18:01:37.595649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.615707 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.630212 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9
fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:
01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.649091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.660608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.677463 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.686275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.686368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.686396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.686450 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.686468 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.688070 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.699697 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.709554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.721139 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.731326 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.750619 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.763278 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.775477 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.789317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.789350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.789361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.789373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.789382 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.790449 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc 
kubenswrapper[4760]: I0123 18:01:37.809083 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.825819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.843023 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d4132054593a452577d0a25f259d3839c1a51f65f98fd22eba3ccd81942db2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"message\\\":\\\".io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.958692 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:01:26.958578 6082 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 18:01:26.961203 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0123 18:01:26.961261 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:01:26.961267 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:01:26.961280 6082 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:01:26.961285 6082 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:01:26.961303 6082 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:01:26.961317 6082 factory.go:656] Stopping watch factory\\\\nI0123 18:01:26.961330 6082 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:01:26.961366 6082 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:01:26.961377 6082 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:01:26.961371 6082 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:01:26.961388 6082 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:01:26.961390 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.891455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.891493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.891504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.891517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.891553 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.993840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.993879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.993890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.993935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:37 crc kubenswrapper[4760]: I0123 18:01:37.993947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:37Z","lastTransitionTime":"2026-01-23T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.096201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.096235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.096244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.096259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.096271 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.206447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.206494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.206503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.206517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.206526 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.309080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.309379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.309506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.309613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.309703 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.411997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.412071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.412093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.412122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.412215 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.515319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.515395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.515440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.515462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.515471 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.538471 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:17:58.734286986 +0000 UTC Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.593506 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:38 crc kubenswrapper[4760]: E0123 18:01:38.593733 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:38 crc kubenswrapper[4760]: E0123 18:01:38.593818 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:01:46.593800061 +0000 UTC m=+49.596257994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.594171 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.594213 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:38 crc kubenswrapper[4760]: E0123 18:01:38.594332 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:38 crc kubenswrapper[4760]: E0123 18:01:38.594517 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.617393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.617543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.617559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.617576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.617587 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.720282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.720321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.720332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.720347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.720356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.823757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.823984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.824077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.824147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.824206 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.927724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.927769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.927780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.927797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:38 crc kubenswrapper[4760]: I0123 18:01:38.927809 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:38Z","lastTransitionTime":"2026-01-23T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.030390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.030456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.030469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.030484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.030495 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.101888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.101933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.101945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.101963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.101975 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.114618 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:39Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.118734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.118768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.118780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.118798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.118809 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.131605 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:39Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.135498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.135546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.135563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.135623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.135644 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.148787 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:39Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.152137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.152189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.152205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.152228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.152244 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.163555 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:39Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.166789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.166828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.166839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.166854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.166867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.180269 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:39Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.180409 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.182044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.182082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.182094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.182112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.182123 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.284769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.284838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.284861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.284892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.284917 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.388385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.388465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.388476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.388493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.389109 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.491574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.491622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.491634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.491651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.491664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.539730 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:14:07.865079869 +0000 UTC Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.593956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.593985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.593994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.594008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.594018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.594647 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.594803 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.594858 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:39 crc kubenswrapper[4760]: E0123 18:01:39.595027 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.697363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.697399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.697425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.697439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.697451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.799745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.799779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.799788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.799800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.799809 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.902311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.902360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.902498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.902582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:39 crc kubenswrapper[4760]: I0123 18:01:39.902611 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:39Z","lastTransitionTime":"2026-01-23T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.006070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.006122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.006133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.006156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.006168 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.108971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.109021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.109036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.109057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.109072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.211251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.211294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.211306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.211323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.211335 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.313368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.313464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.313482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.313508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.313529 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.417102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.417182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.417199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.417223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.417239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.520200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.520265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.520276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.520294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.520305 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.540305 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:59:28.178162188 +0000 UTC Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.594731 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:40 crc kubenswrapper[4760]: E0123 18:01:40.595066 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.594739 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:40 crc kubenswrapper[4760]: E0123 18:01:40.595361 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.622160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.622212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.622227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.622248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.622263 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.725003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.725074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.725087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.725139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.725155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.827045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.827104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.827120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.827137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.827149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.929741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.929801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.929815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.929837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:40 crc kubenswrapper[4760]: I0123 18:01:40.929853 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:40Z","lastTransitionTime":"2026-01-23T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.032917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.032980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.033005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.033033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.033054 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.135821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.135898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.135921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.135951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.135972 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.238480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.238533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.238549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.238571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.238587 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.341654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.341735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.341752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.341775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.341796 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.447547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.447642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.447669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.447704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.447742 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.541023 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:18:20.750206722 +0000 UTC Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.550606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.550645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.550655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.550673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.550685 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.594157 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:41 crc kubenswrapper[4760]: E0123 18:01:41.594280 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.594159 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:41 crc kubenswrapper[4760]: E0123 18:01:41.595133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.653862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.653902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.653913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.653929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.653977 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.756242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.756285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.756296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.756311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.756322 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.858350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.858630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.858639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.858652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.858660 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.960621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.960684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.960700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.960724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:41 crc kubenswrapper[4760]: I0123 18:01:41.960741 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:41Z","lastTransitionTime":"2026-01-23T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.063287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.063442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.063457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.063473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.063485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.165329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.165368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.165376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.165390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.165400 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.268648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.268704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.268719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.268739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.268754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.371456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.371498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.371508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.371523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.371533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.473570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.473632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.473654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.473681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.473702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.541892 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:48:01.292061809 +0000 UTC Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.575976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.576050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.576075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.576107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.576130 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.594647 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.594754 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:42 crc kubenswrapper[4760]: E0123 18:01:42.594815 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:42 crc kubenswrapper[4760]: E0123 18:01:42.594885 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.678725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.678782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.678793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.678808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.678843 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.780589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.780617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.780626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.780639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.780648 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.883103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.883143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.883152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.883166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.883176 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.984982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.985032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.985043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.985060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:42 crc kubenswrapper[4760]: I0123 18:01:42.985072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:42Z","lastTransitionTime":"2026-01-23T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.087433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.087473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.087489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.087505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.087514 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.189779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.189844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.189858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.189874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.189905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.292723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.292757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.292769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.292809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.292821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.395690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.395761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.395778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.395802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.395818 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.499331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.499392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.499443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.499468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.499486 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.543003 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:52:54.274318608 +0000 UTC Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.595172 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.595190 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:43 crc kubenswrapper[4760]: E0123 18:01:43.595456 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:43 crc kubenswrapper[4760]: E0123 18:01:43.595565 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.603488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.603570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.603585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.603607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.603619 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.706963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.707022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.707037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.707066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.707080 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.810119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.810172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.810183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.810199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.810212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.912537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.912573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.912585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.912601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:43 crc kubenswrapper[4760]: I0123 18:01:43.912611 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:43Z","lastTransitionTime":"2026-01-23T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.015018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.015336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.015437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.015530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.015601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.117903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.117939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.117950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.117968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.117979 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.220831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.220872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.220885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.220900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.220911 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.323487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.323779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.323852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.323926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.324082 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.426780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.426823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.426834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.426849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.426860 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.529932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.529973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.529982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.529999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.530011 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.544157 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:47:31.243030983 +0000 UTC Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.595063 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.595142 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:44 crc kubenswrapper[4760]: E0123 18:01:44.595306 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:44 crc kubenswrapper[4760]: E0123 18:01:44.595483 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.633354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.633400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.633468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.633494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.633558 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.736658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.736716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.736726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.736739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.736750 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.839087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.839295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.839354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.839455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.839517 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.941980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.942019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.942030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.942046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:44 crc kubenswrapper[4760]: I0123 18:01:44.942056 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:44Z","lastTransitionTime":"2026-01-23T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.044451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.044496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.044510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.044533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.044549 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.147053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.147090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.147103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.147118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.147131 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.250127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.250177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.250192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.250214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.250229 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.353264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.353324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.353333 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.353352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.353363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.455666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.455707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.455719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.455735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.455748 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.545094 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:13:59.06250549 +0000 UTC Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.558262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.558298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.558310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.558327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.558339 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.594897 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.595009 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:45 crc kubenswrapper[4760]: E0123 18:01:45.595027 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:45 crc kubenswrapper[4760]: E0123 18:01:45.595330 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.595530 4760 scope.go:117] "RemoveContainer" containerID="af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.613896 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.628566 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.648337 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661111 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.661669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc 
kubenswrapper[4760]: I0123 18:01:45.673969 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.684381 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.700023 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.709307 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.722539 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.738624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b4
7eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181
c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23
T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.751012 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.759370 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.762766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.762786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.762795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.762810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.762819 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.770555 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.779678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab
9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.798102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.810844 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.825472 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.865298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.865349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.865362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.865377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.865388 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.910319 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/1.log" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.912895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.913727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.924762 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.949836 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.967840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.967893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.967906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.967923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 
18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.967938 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:45Z","lastTransitionTime":"2026-01-23T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.971157 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:45 crc kubenswrapper[4760]: I0123 18:01:45.990155 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.001470 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.015523 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9
fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:
01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.015735 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.023854 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.027326 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.038513 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.052026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.064015 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.070164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.070195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.070203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.070216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.070225 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.076747 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.087521 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.099507 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.111120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.129912 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.142989 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.155623 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.166164 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8
ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.176293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.176360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.176370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc 
kubenswrapper[4760]: I0123 18:01:46.176384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.176393 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.182358 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23
T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.197135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.208733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.222396 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.236136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.258252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.275199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.279566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.279608 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.279624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.279644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.279656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.291770 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.311981 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.327509 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc 
kubenswrapper[4760]: I0123 18:01:46.347087 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.363504 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.377033 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.382512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.382568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.382584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.382609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.382625 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.390703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.406821 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.420629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.433615 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:46Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.486183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.486216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.486226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.486238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.486247 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.545866 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:30:36.98334829 +0000 UTC Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.588858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.588901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.588913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.588932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.588944 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.595214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.595233 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:46 crc kubenswrapper[4760]: E0123 18:01:46.595380 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:46 crc kubenswrapper[4760]: E0123 18:01:46.595526 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.680453 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:46 crc kubenswrapper[4760]: E0123 18:01:46.680701 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:46 crc kubenswrapper[4760]: E0123 18:01:46.680822 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. 
No retries permitted until 2026-01-23 18:02:02.680795847 +0000 UTC m=+65.683253960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.691841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.691890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.691903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.691921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.691934 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.795070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.795154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.795178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.795212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.795233 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.897475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.897517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.897525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.897539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:46 crc kubenswrapper[4760]: I0123 18:01:46.897552 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:46Z","lastTransitionTime":"2026-01-23T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.000029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.000082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.000091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.000107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.000118 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.102216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.102262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.102273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.102289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.102298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.205447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.205506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.205517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.205537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.205549 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.287343 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287517 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 18:02:19.287487916 +0000 UTC m=+82.289945849 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.287570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.287619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.287666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.287699 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287773 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287800 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287828 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287835 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287841 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287849 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287852 4760 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:02:19.287828585 +0000 UTC m=+82.290286588 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287861 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287893 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:02:19.287874807 +0000 UTC m=+82.290332740 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287913 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.287933 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:02:19.287923798 +0000 UTC m=+82.290381731 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.288044 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:02:19.288028801 +0000 UTC m=+82.290486734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.308333 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.308395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.308436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.308462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.308479 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.411429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.411477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.411488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.411505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.411518 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.514023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.514068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.514077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.514090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.514099 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.546780 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:43:26.505023798 +0000 UTC Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.594779 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.594905 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.595872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.596085 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.613369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.616431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.616461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.616474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.616490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.616501 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.627647 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.644570 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc 
kubenswrapper[4760]: I0123 18:01:47.654876 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4
46tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.666792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.684388 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.697771 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.707717 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.718752 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.719354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.719537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.719635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.719728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.719817 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.728923 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.745181 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.758014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.770130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.782818 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.794863 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.808471 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.822401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.822464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.822476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.822496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.822509 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.826709 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, 
Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.839789 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc 
kubenswrapper[4760]: I0123 18:01:47.920077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/2.log" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.921019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/1.log" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.923365 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7" exitCode=1 Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.923426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.923470 4760 scope.go:117] "RemoveContainer" containerID="af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924084 4760 scope.go:117] "RemoveContainer" containerID="65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7" Jan 23 18:01:47 crc kubenswrapper[4760]: E0123 18:01:47.924214 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.924763 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:47Z","lastTransitionTime":"2026-01-23T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.935726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.945047 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.954817 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.963294 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.973733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:47 crc kubenswrapper[4760]: I0123 18:01:47.987019 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b4
7eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181
c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23
T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.001309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:47Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.011582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.022431 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.026545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.026580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.026591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.026606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.026616 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.031437 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.047532 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.057284 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.067197 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.077393 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.090295 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.101195 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.117866 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.128469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:48Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.129506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.129548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.129561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.129577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.129589 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.231921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.231967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.231977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.231995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.232006 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.333859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.333901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.333911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.333926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.333937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.436465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.436504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.436514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.436530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.436540 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.539509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.539584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.539604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.539629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.539648 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.547683 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:40:20.031720251 +0000 UTC Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.594355 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.594393 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:48 crc kubenswrapper[4760]: E0123 18:01:48.594537 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:48 crc kubenswrapper[4760]: E0123 18:01:48.594634 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.641939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.641971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.641980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.641992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.642001 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.744888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.744937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.744951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.744968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.744979 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.848064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.848131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.848150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.848176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.848199 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.931480 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/2.log" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.950582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.950614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.950621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.950634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:48 crc kubenswrapper[4760]: I0123 18:01:48.950642 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:48Z","lastTransitionTime":"2026-01-23T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.053106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.053139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.053150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.053167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.053179 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.156819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.156936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.156950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.156978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.156995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.259277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.259323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.259366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.259383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.259397 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.305730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.305790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.305805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.305827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.305843 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.325861 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:49Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.333697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.333750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.333762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.333779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.333790 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.351683 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:49Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.356035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.356114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.356137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.356164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.356184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.377059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.377122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.377138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.377158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.377171 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.397043 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:49Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.401915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.401982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.402006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.402035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.402056 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.417075 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:49Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.417219 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.418748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.418778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.418790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.418804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.418815 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.521453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.521494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.521505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.521521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.521533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.548002 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:45:08.532698157 +0000 UTC Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.595061 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.595122 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.595204 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:49 crc kubenswrapper[4760]: E0123 18:01:49.595331 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.624019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.624051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.624059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.624072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.624083 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.726362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.726432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.726450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.726475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.726495 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.829077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.829118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.829129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.829144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.829154 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.931631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.931699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.931720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.931741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:49 crc kubenswrapper[4760]: I0123 18:01:49.931760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:49Z","lastTransitionTime":"2026-01-23T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.034206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.034277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.034289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.034310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.034324 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.137235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.137269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.137281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.137298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.137309 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.240150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.240189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.240199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.240215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.240224 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.343162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.343209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.343225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.343249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.343265 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.446563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.446635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.446659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.446689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.446713 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.548185 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:34:45.24770577 +0000 UTC Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.548969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.549033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.549056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.549089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.549148 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.594456 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.594455 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:50 crc kubenswrapper[4760]: E0123 18:01:50.594676 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:50 crc kubenswrapper[4760]: E0123 18:01:50.594688 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.651181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.651220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.651230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.651242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.651251 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.753958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.754014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.754032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.754055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.754079 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.857925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.857987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.858006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.858031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.858050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.960740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.960808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.960826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.960851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:50 crc kubenswrapper[4760]: I0123 18:01:50.960873 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:50Z","lastTransitionTime":"2026-01-23T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.064089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.064140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.064152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.064168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.064181 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.166230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.166254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.166262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.166274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.166282 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.268266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.268298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.268306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.268319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.268328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.371047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.371094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.371109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.371126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.371139 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.473224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.473270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.473283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.473300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.473312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.548701 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:31:19.236740266 +0000 UTC Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.576599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.576636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.576646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.576662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.576673 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.594970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.595039 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:51 crc kubenswrapper[4760]: E0123 18:01:51.595090 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:51 crc kubenswrapper[4760]: E0123 18:01:51.595195 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.678864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.678903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.678919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.678939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.678950 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.780956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.780996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.781009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.781027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.781038 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.882794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.882839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.882848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.882866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.882884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.984397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.984730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.984942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.985166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:51 crc kubenswrapper[4760]: I0123 18:01:51.985357 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:51Z","lastTransitionTime":"2026-01-23T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.087764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.087828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.087847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.087872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.087888 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.191033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.191104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.191127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.191156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.191198 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.293566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.293640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.293660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.293688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.293711 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.395652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.395687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.395698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.395715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.395726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.498383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.498436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.498450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.498465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.498472 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.549280 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:44:11.010667447 +0000 UTC Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.594984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.595009 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:52 crc kubenswrapper[4760]: E0123 18:01:52.595133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:52 crc kubenswrapper[4760]: E0123 18:01:52.595230 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.601121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.601184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.601199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.601226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.601239 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.703693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.703735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.703751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.703772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.703787 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.806791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.806843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.806854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.806872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.806883 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.909295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.909350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.909364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.909381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:52 crc kubenswrapper[4760]: I0123 18:01:52.909391 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:52Z","lastTransitionTime":"2026-01-23T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.011947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.011980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.011988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.012001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.012012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.114967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.115022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.115035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.115064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.115083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.217956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.218076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.218102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.218133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.218155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.320840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.320934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.320952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.321215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.321234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.424275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.424314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.424323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.424338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.424348 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.527647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.527711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.527729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.527756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.527773 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.550170 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:43:31.821497294 +0000 UTC Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.594713 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.594757 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:53 crc kubenswrapper[4760]: E0123 18:01:53.594919 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:53 crc kubenswrapper[4760]: E0123 18:01:53.595005 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.631180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.631235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.631250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.631270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.631286 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.733981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.734057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.734080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.734114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.734138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.837316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.837385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.837397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.837438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.837454 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.940364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.940466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.940482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.940506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:53 crc kubenswrapper[4760]: I0123 18:01:53.940521 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:53Z","lastTransitionTime":"2026-01-23T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.043552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.043619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.043641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.043670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.043696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.147660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.147709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.147721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.147739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.147749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.250960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.251013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.251022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.251041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.251052 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.353755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.353805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.353815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.353830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.353842 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.456443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.456492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.456502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.456518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.456530 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.550589 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:32:11.18093138 +0000 UTC Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.558821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.558886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.558902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.558923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.558938 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.594190 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.594251 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:54 crc kubenswrapper[4760]: E0123 18:01:54.594389 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:54 crc kubenswrapper[4760]: E0123 18:01:54.594607 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.660890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.660927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.660936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.660951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.660962 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.763460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.763506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.763523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.763539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.763549 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.865228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.865312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.865322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.865337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.865347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.967012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.967049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.967060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.967074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:54 crc kubenswrapper[4760]: I0123 18:01:54.967086 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:54Z","lastTransitionTime":"2026-01-23T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.069368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.069441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.069458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.069480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.069497 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.171356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.171396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.171420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.171435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.171451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.273478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.273514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.273525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.273547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.273559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.375645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.375704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.375721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.375736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.375746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.481593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.481646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.481663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.481686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.481702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.551156 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:41:34.20186111 +0000 UTC Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.585393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.585491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.585509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.585540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.585560 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.594814 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:55 crc kubenswrapper[4760]: E0123 18:01:55.594965 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.594814 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:55 crc kubenswrapper[4760]: E0123 18:01:55.595673 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.687865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.687913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.687922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.687940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.687957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.789833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.789873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.789882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.789896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.789906 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.892442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.892475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.892491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.892513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.892524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.994797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.994888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.994906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.994928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:55 crc kubenswrapper[4760]: I0123 18:01:55.994943 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:55Z","lastTransitionTime":"2026-01-23T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.097730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.097775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.097786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.097802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.097812 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.200343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.200782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.200886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.200982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.201054 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.304107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.304161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.304178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.304203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.304222 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.406184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.406217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.406227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.406241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.406250 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.508859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.508902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.508911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.508926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.508935 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.551774 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:14:13.431399727 +0000 UTC Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.594559 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.594573 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:56 crc kubenswrapper[4760]: E0123 18:01:56.595217 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:56 crc kubenswrapper[4760]: E0123 18:01:56.595444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.610901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.610934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.610947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.610962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.610973 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.714174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.714217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.714227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.714242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.714251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.816785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.816845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.816859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.816880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.816893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.919349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.919380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.919390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.919402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:56 crc kubenswrapper[4760]: I0123 18:01:56.919435 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:56Z","lastTransitionTime":"2026-01-23T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.021638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.021674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.021684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.021698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.021707 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.123669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.123717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.123728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.123743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.123754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.227144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.227283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.227311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.227338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.227356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.330293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.330366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.330384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.330473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.330516 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.432728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.432771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.432781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.432795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.432806 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.535066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.535120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.535137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.535160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.535176 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.552499 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:13:44.612737187 +0000 UTC Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.595268 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:57 crc kubenswrapper[4760]: E0123 18:01:57.595576 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.595309 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:57 crc kubenswrapper[4760]: E0123 18:01:57.595991 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.610684 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.630583 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.637626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.637979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.638095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.638171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.638239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.649502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.669712 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.685069 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.698004 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.716966 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.731496 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.741292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.741338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.741353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.741369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.741380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.744964 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.761178 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab
9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.785212 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.804377 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.823351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.838206 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.844563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.844601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.844611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc 
kubenswrapper[4760]: I0123 18:01:57.844626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.844639 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.851469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc 
kubenswrapper[4760]: I0123 18:01:57.868530 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.887312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.907825 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af0bb9386606a27830a206e35bdf38fbd3cd71df16351748928ad6253eb10eed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:28Z is after 2025-08-24T17:21:41Z]\\\\nI0123 18:01:28.550350 6239 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Tem\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:57Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.946904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.946970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.946980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.947000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:57 crc kubenswrapper[4760]: I0123 18:01:57.947012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:57Z","lastTransitionTime":"2026-01-23T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.050180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.050218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.050226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.050241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.050251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.153658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.153705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.153714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.153732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.153741 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.255653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.255695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.255705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.255720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.255730 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.358746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.358808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.358823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.358846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.358866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.462064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.462245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.462270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.462297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.462312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.553227 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:24:45.552020017 +0000 UTC Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.564920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.564980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.564989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.565003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.565013 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.594512 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.594619 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:01:58 crc kubenswrapper[4760]: E0123 18:01:58.594642 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:01:58 crc kubenswrapper[4760]: E0123 18:01:58.594698 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.667734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.667800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.667815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.667836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.667851 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.770735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.770792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.770806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.770826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.770841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.873009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.873049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.873062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.873079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.873089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.975574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.975619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.975631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.975648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:58 crc kubenswrapper[4760]: I0123 18:01:58.975663 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:58Z","lastTransitionTime":"2026-01-23T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.078661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.078699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.078707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.078721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.078730 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.180827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.180904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.180923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.180952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.180970 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.283886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.283957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.283970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.283990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.284027 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.387120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.387236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.387261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.387377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.387564 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.491142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.491241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.491325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.491399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.491526 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.553698 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:26:57.437598934 +0000 UTC Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.594323 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.594474 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.595117 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.595266 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.595294 4760 scope.go:117] "RemoveContainer" containerID="65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.595710 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.596584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.596659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.596671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.596689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.596702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.617952 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.632207 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9
fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:
01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.644904 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.656175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.656221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.656236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.656256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.656271 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.657773 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.671768 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.674133 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.680492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.680541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.680557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.680577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.680602 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.689568 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.693427 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.699352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.699422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.699435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.699453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.699488 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.700951 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.713421 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.714613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.717103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.717140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.717152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.717168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.717179 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.731560 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.743565 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.744787 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.746289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.746312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.746322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.746337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.746350 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.757012 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: E0123 18:01:59.757188 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.758669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.758689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.758698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.758725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.758734 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.764857 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.777236 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.788736 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.801125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.811883 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc 
kubenswrapper[4760]: I0123 18:01:59.824717 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.839803 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.861918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.861989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.862000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.862043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.862058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.868455 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:01:59Z is after 2025-08-24T17:21:41Z" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.964594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.964632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.964643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.964659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:01:59 crc kubenswrapper[4760]: I0123 18:01:59.964670 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:59Z","lastTransitionTime":"2026-01-23T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.067515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.067739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.067747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.067759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.067768 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.169971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.170010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.170021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.170037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.170047 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.272275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.272372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.272391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.272432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.272457 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.374906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.374948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.374959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.374975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.374988 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.477352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.477479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.477518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.477551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.477573 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.554430 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:57:26.015292332 +0000 UTC Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.580124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.580167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.580181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.580199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.580212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.594674 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.594743 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:00 crc kubenswrapper[4760]: E0123 18:02:00.594819 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:00 crc kubenswrapper[4760]: E0123 18:02:00.594979 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.682547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.682607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.682627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.682653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.682673 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.784857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.784891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.784900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.784913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.784923 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.887721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.887785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.887799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.887816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.887828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.990435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.990467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.990474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.990488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:00 crc kubenswrapper[4760]: I0123 18:02:00.990496 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:00Z","lastTransitionTime":"2026-01-23T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.092877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.092910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.092919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.092933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.092942 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.195224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.195263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.195272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.195287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.195296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.298352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.298635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.298767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.298871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.298966 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.401531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.401860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.401992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.402103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.402277 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.505628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.505689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.505707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.505728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.505742 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.555201 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:52:06.928098113 +0000 UTC Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.595024 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:01 crc kubenswrapper[4760]: E0123 18:02:01.595159 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.595219 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:01 crc kubenswrapper[4760]: E0123 18:02:01.595351 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.608087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.608134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.608162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.608186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.608199 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.711588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.711632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.711644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.711659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.711669 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.814368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.814454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.814465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.814479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.814491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.916829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.916866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.916883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.916901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:01 crc kubenswrapper[4760]: I0123 18:02:01.916911 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:01Z","lastTransitionTime":"2026-01-23T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.019309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.019371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.019385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.019401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.019427 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.121085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.121126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.121138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.121155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.121167 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.223449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.223491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.223504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.223520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.223534 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.326583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.326637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.326649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.326668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.326680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.428889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.428926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.428938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.428954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.428964 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.531954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.532004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.532026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.532054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.532075 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.556056 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:59:45.065364591 +0000 UTC Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.594700 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.594760 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:02 crc kubenswrapper[4760]: E0123 18:02:02.594863 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:02 crc kubenswrapper[4760]: E0123 18:02:02.594940 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.634896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.634978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.634993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.635028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.635041 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.738821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.738875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.738887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.738909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.738923 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.747628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:02 crc kubenswrapper[4760]: E0123 18:02:02.747854 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:02:02 crc kubenswrapper[4760]: E0123 18:02:02.747965 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:02:34.74793333 +0000 UTC m=+97.750391263 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.841524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.841557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.841565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.841596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.841607 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.944248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.944299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.944312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.944330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:02 crc kubenswrapper[4760]: I0123 18:02:02.944345 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:02Z","lastTransitionTime":"2026-01-23T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.046632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.046672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.046682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.046699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.046707 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.149093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.149132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.149142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.149158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.149168 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.251457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.251491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.251502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.251519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.251531 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.354191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.354238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.354250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.354268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.354280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.457270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.457319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.457329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.457348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.457358 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.556432 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:29:07.739060259 +0000 UTC Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.560029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.560073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.560085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.560102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.560115 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.594397 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.594452 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:03 crc kubenswrapper[4760]: E0123 18:02:03.594576 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:03 crc kubenswrapper[4760]: E0123 18:02:03.594736 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.663666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.663755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.663770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.663789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.663808 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.766519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.766555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.766563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.766577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.766587 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.868760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.868793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.868802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.868815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.868824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.971147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.971501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.971606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.971697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:03 crc kubenswrapper[4760]: I0123 18:02:03.971772 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:03Z","lastTransitionTime":"2026-01-23T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.074290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.074349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.074362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.074385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.074399 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.177062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.177330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.177477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.177591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.177697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.280187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.280233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.280246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.280263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.280275 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.382866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.382923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.382937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.382955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.382968 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.485032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.485076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.485088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.485106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.485117 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.556972 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:49:27.008203478 +0000 UTC Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.586997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.587036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.587044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.587059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.587069 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.594203 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.594298 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:04 crc kubenswrapper[4760]: E0123 18:02:04.594452 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:04 crc kubenswrapper[4760]: E0123 18:02:04.594591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.689367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.689431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.689446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.689463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.689476 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.792147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.792187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.792198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.792214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.792227 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.895717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.895771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.895784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.895807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.895822 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.981088 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/0.log" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.981136 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac96490a-85b1-48f4-99d1-2b7505744007" containerID="02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216" exitCode=1 Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.981163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerDied","Data":"02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216"} Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.981536 4760 scope.go:117] "RemoveContainer" containerID="02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.995834 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.997760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.997819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.997830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.997847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:04 crc kubenswrapper[4760]: I0123 18:02:04.997858 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:04Z","lastTransitionTime":"2026-01-23T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.013541 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.035582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.048014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.063305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.077152 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.090568 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.100765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.100833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.100847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.100866 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.100880 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.102024 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.114186 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.127185 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8882
46efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.141507 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.149376 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.158035 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.168203 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.185446 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.196475 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.202638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.202666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.202679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.202695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.202706 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.208715 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cb
ea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.219833 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.305061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.305103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.305116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc 
kubenswrapper[4760]: I0123 18:02:05.305135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.305148 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.407763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.407843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.407865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.407884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.407895 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.510484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.510522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.510533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.510547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.510558 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.557884 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:52:17.097492547 +0000 UTC Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.594389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.594449 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:05 crc kubenswrapper[4760]: E0123 18:02:05.594520 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:05 crc kubenswrapper[4760]: E0123 18:02:05.594600 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.612640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.612689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.612704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.612722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.612733 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.714976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.715068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.715079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.715094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.715103 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.817348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.817381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.817392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.817423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.817434 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.920727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.920799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.920820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.920881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.920901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:05Z","lastTransitionTime":"2026-01-23T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.986736 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/0.log" Jan 23 18:02:05 crc kubenswrapper[4760]: I0123 18:02:05.986801 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerStarted","Data":"02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.004473 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.014766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.023450 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.024186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.024228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.024291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.024361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.024382 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.033488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.049489 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.060778 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.077286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.086691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.098330 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.109054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.120547 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.128777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.128856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.128878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.128907 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.128929 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.131808 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.144179 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.160586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e94
48beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.178090 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.190347 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.203780 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.216209 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:06Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.231689 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.231860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.231882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.231906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.231925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.334873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.334927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.334941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.334962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.334979 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.438159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.438200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.438211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.438232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.438245 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.540739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.540775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.540787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.540804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.540816 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.558633 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:27:24.852571622 +0000 UTC Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.594503 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:06 crc kubenswrapper[4760]: E0123 18:02:06.594630 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.594828 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:06 crc kubenswrapper[4760]: E0123 18:02:06.594899 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.644259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.644338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.644355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.644377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.644439 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.747219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.747263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.747274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.747289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.747300 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.849845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.849914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.849925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.849947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.849959 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.952486 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.952553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.952570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.952593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:06 crc kubenswrapper[4760]: I0123 18:02:06.952611 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:06Z","lastTransitionTime":"2026-01-23T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.056811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.056892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.056902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.056919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.056936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.158862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.158908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.158919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.158936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.158948 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.260713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.260755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.260767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.260784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.260798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.363111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.363166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.363178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.363196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.363208 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.466138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.466176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.466186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.466201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.466212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.559792 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:59:52.904952584 +0000 UTC Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.568826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.568887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.568897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.568912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.568921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.594256 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.594334 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:07 crc kubenswrapper[4760]: E0123 18:02:07.594548 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:07 crc kubenswrapper[4760]: E0123 18:02:07.594702 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.607769 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.625713 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e94
48beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.639378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.651246 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.667494 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.671297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.671358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.671369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.671384 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.671393 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.678368 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.692512 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.701846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.713693 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.723669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.742910 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.756906 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.766896 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.773095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.773129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.773140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.773154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.773165 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.776526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.784528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.795321 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.805787 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.824264 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:07Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.874753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.874988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.875000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.875017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.875029 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.977305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.977816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.977838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.977854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:07 crc kubenswrapper[4760]: I0123 18:02:07.977863 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:07Z","lastTransitionTime":"2026-01-23T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.080178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.080232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.080246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.080275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.080286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.182338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.182377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.182386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.182401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.182428 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.283998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.284032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.284041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.284055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.284065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.386624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.386664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.386674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.386689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.386698 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.489215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.489243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.489251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.489267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.489278 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.560443 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:39:34.954166023 +0000 UTC Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.591239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.591276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.591289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.591306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.591318 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.594689 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.594702 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:08 crc kubenswrapper[4760]: E0123 18:02:08.594822 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:08 crc kubenswrapper[4760]: E0123 18:02:08.594906 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.693077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.693113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.693125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.693139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.693152 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.795302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.795357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.795367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.795379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.795388 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.897527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.897573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.897584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.897601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:08 crc kubenswrapper[4760]: I0123 18:02:08.897613 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:08Z","lastTransitionTime":"2026-01-23T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.000728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.000789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.000802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.000817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.000832 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.103070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.103116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.103129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.103149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.103163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.205497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.205529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.205537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.205550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.205562 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.307750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.307788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.307799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.307816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.307828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.410038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.410076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.410087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.410102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.410113 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.511795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.511843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.511855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.511872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.511884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.560840 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:54:23.73673655 +0000 UTC Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.595339 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.595508 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.595671 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.595856 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.613623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.613685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.613703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.613730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.613747 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.716337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.716371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.716379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.716392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.716401 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.785206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.785240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.785247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.785263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.785273 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.795916 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:09Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.799180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.799211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.799221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.799237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.799250 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.810193 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:09Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.812773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.812807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.812818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.812836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.812848 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.823398 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:09Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.827108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.827166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.827182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.827202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.827224 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.840849 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:09Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.843797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.843831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.843841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.843855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.843864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.854064 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:09Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:09 crc kubenswrapper[4760]: E0123 18:02:09.854175 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.855674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.855704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.855715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.855731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.855743 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.958982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.959019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.959030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.959045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:09 crc kubenswrapper[4760]: I0123 18:02:09.959058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:09Z","lastTransitionTime":"2026-01-23T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.061576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.061610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.061620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.061635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.061646 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.163764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.163826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.163846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.163874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.163933 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.266475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.266527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.266546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.266564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.266578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.369541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.369578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.369590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.369606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.369619 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.472019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.472075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.472086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.472101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.472112 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.561123 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:53:24.308452044 +0000 UTC Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.575148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.575224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.575251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.575284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.575306 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.594564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.594686 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:10 crc kubenswrapper[4760]: E0123 18:02:10.594806 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:10 crc kubenswrapper[4760]: E0123 18:02:10.594916 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.677689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.677749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.677788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.677814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.677832 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.779881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.779921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.779939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.779956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.779967 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.882696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.882745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.882761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.882785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.882802 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.985224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.985293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.985312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.985336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:10 crc kubenswrapper[4760]: I0123 18:02:10.985355 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:10Z","lastTransitionTime":"2026-01-23T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.088546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.088614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.088631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.088657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.088674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.190898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.190957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.190969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.190987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.190999 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.293533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.293565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.293574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.293591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.293602 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.396268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.396332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.396350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.396374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.396392 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.498821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.498881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.498898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.498921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.498940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.561720 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:51:55.235065182 +0000 UTC Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.594242 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.594384 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:11 crc kubenswrapper[4760]: E0123 18:02:11.594455 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:11 crc kubenswrapper[4760]: E0123 18:02:11.594618 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.601338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.601384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.601395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.601437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.601450 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.704256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.704299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.704310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.704327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.704339 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.807329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.807443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.807469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.807493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.807509 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.915566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.915607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.915618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.915636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:11 crc kubenswrapper[4760]: I0123 18:02:11.915648 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:11Z","lastTransitionTime":"2026-01-23T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.018219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.018282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.018300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.018329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.018374 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.121606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.121684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.121705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.121733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.121753 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.225383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.225470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.225488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.225514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.225536 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.328790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.328857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.328875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.328900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.328921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.431325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.431425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.431446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.431471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.431488 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.533697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.533730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.533739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.533753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.533763 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.562369 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:37:58.843704673 +0000 UTC Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.595109 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.595121 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:12 crc kubenswrapper[4760]: E0123 18:02:12.595233 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:12 crc kubenswrapper[4760]: E0123 18:02:12.595330 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.636387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.636436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.636447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.636461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.636471 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.738241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.738281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.738291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.738306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.738316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.841800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.841856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.841877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.841920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.841951 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.948347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.948442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.948465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.948494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:12 crc kubenswrapper[4760]: I0123 18:02:12.948516 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:12Z","lastTransitionTime":"2026-01-23T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.051949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.051988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.051996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.052010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.052021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.155512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.155575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.155600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.155632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.155656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.258729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.258792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.258811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.258838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.258857 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.361759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.361818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.361827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.361861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.361872 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.463938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.464005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.464028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.464057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.464079 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.562570 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:20:47.210662824 +0000 UTC Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.566741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.566782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.566793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.566808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.566819 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.594360 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.594397 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:13 crc kubenswrapper[4760]: E0123 18:02:13.594577 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:13 crc kubenswrapper[4760]: E0123 18:02:13.594672 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.669101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.669229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.669255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.669278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.669293 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.772585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.772619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.772627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.772657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.772668 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.874313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.874439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.874468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.874498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.874520 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.976753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.976822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.976840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.976866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:13 crc kubenswrapper[4760]: I0123 18:02:13.976884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:13Z","lastTransitionTime":"2026-01-23T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.079907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.079972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.079995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.080023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.080044 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.183254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.183348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.183389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.183457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.183529 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.285749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.285809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.285827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.285849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.285866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.387795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.387889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.388318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.388370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.388390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.496343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.496450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.496478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.496509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.496532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.562994 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:44:48.655903151 +0000 UTC Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.594918 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.594995 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:14 crc kubenswrapper[4760]: E0123 18:02:14.595353 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:14 crc kubenswrapper[4760]: E0123 18:02:14.596291 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.596812 4760 scope.go:117] "RemoveContainer" containerID="65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.599253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.599337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.599355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.599477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.599519 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.702037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.702573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.702895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.703190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.703646 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.806146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.806431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.806522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.806619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.806701 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.909041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.909354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.909463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.909563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:14 crc kubenswrapper[4760]: I0123 18:02:14.909644 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:14Z","lastTransitionTime":"2026-01-23T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.012440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.012490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.012501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.012516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.012525 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.115370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.115518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.115594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.115621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.115641 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.217638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.217694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.217707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.217725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.217738 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.319963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.320018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.320029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.320050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.320064 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.422923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.422970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.422979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.422994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.423008 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.525855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.525911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.525928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.525952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.525966 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.563519 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:59:55.905182474 +0000 UTC Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.595050 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.595159 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:15 crc kubenswrapper[4760]: E0123 18:02:15.595184 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:15 crc kubenswrapper[4760]: E0123 18:02:15.595354 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.628066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.628103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.628111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.628125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.628135 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.730595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.730651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.730667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.730691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.730708 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.832716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.832874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.832954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.833018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.833083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.934930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.934986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.934995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.935012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:15 crc kubenswrapper[4760]: I0123 18:02:15.935020 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:15Z","lastTransitionTime":"2026-01-23T18:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.030994 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/2.log" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.033619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.034121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.037491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.037527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.037537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.037553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.037565 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.048038 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.065329 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.076298 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc 
kubenswrapper[4760]: I0123 18:02:16.091790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.104091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.118508 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.133495 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.139927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.139958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.139965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.139980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.139992 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.152877 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bo
nd-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.171076 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.188745 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.200065 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.210224 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab
9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.221266 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.229478 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.238899 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.242534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.242560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.242572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.242588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.242598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.251461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.268729 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.281073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.345523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.345566 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.345574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.345591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.345601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.447902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.447936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.447945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.447957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.447965 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.550753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.550795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.550808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.550825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.550838 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.563753 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:09:03.392396509 +0000 UTC Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.594449 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.594460 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:16 crc kubenswrapper[4760]: E0123 18:02:16.594586 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:16 crc kubenswrapper[4760]: E0123 18:02:16.594696 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.653218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.653261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.653290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.653306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.653315 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.755913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.755962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.755974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.755992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.756004 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.858429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.858466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.858474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.858490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.858499 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.960341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.960381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.960393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.960430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:16 crc kubenswrapper[4760]: I0123 18:02:16.960442 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:16Z","lastTransitionTime":"2026-01-23T18:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.062748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.063030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.063101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.063174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.063246 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.166256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.166296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.166308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.166323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.166334 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.268587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.268897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.268976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.269266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.269423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.371835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.372164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.372265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.372331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.372401 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.475088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.475123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.475134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.475172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.475193 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.564627 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:28:25.770480622 +0000 UTC Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.577523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.577575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.577587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.577605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.577616 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.595106 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:17 crc kubenswrapper[4760]: E0123 18:02:17.595239 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.595255 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:17 crc kubenswrapper[4760]: E0123 18:02:17.595330 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.606519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.609958 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.621964 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.636350 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e94
48beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.651859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.664352 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.676525 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.679976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.680025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.680037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.680055 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.680066 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.687457 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.716339 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.733495 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.744526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.755220 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.771868 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.782485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.782542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.782555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.782571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.782581 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.783610 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62
f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.794041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.809228 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 
18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.820345 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc 
kubenswrapper[4760]: I0123 18:02:17.834468 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde16
5aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.846517 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.885121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.885154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.885165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.885177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.885187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.987378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.987422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.987433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.987446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:17 crc kubenswrapper[4760]: I0123 18:02:17.987455 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:17Z","lastTransitionTime":"2026-01-23T18:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.040979 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/3.log" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.041634 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/2.log" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.043719 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" exitCode=1 Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.043781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.043832 4760 scope.go:117] "RemoveContainer" containerID="65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.044465 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:02:18 crc kubenswrapper[4760]: E0123 18:02:18.044618 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.057818 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc 
kubenswrapper[4760]: I0123 18:02:18.067972 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f596bc-e797-4212-a922-05d4f490ea0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34fdcb71cc1645d02a4a3eb4c244062e7defffc5fe2029c76bcfd46d69bb35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.080591 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.089464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.089489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.089498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.089510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.089518 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.094346 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.112574 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:17Z\\\",\\\"message\\\":\\\"nsuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0123 18:02:16.531969 6863 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0123 18:02:16.531980 6863 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0123 18:02:16.532004 6863 default_network_controller.go:776] Recording success event on pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0123 18:02:16.531922 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.126809 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] 
multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"
name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.141163 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8882
46efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.152960 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.163503 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.179669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.191363 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.192559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.192594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.192603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.192620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.192631 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.204277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.216931 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.229940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.242252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.262296 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.277058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.292743 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.295559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.295607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.295621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.295643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.295656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.308296 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.398004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.398052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.398060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.398074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.398083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.501069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.501135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.501148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.501168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.501183 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.565113 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:03:49.038117798 +0000 UTC Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.594898 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:18 crc kubenswrapper[4760]: E0123 18:02:18.595629 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.595037 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:18 crc kubenswrapper[4760]: E0123 18:02:18.595958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.603797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.603837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.603848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.603863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.603874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.706930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.706996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.707014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.707041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.707058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.809545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.809603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.809621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.809644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.809661 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.912839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.912891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.912904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.912918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:18 crc kubenswrapper[4760]: I0123 18:02:18.912927 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:18Z","lastTransitionTime":"2026-01-23T18:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.015892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.015933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.015944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.015966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.015978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.048857 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/3.log" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.117918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.117966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.117977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.117992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.118003 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.221011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.221067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.221090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.221118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.221137 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.317326 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.317450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.317500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317538 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:23.317513052 +0000 UTC m=+146.319970985 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.317570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.317614 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317629 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317648 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317661 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317710 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:03:23.317699218 +0000 UTC m=+146.320157151 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317709 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317731 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317742 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317769 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:03:23.317760299 +0000 UTC m=+146.320218232 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317798 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317902 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:03:23.317880993 +0000 UTC m=+146.320338946 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.317909 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.318083 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:03:23.318055378 +0000 UTC m=+146.320513341 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.323789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.323901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.323939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.323968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc 
kubenswrapper[4760]: I0123 18:02:19.323992 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.426151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.426238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.426279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.426299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.426312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.529471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.529520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.529543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.529567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.529583 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.566025 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:14:12.390670412 +0000 UTC Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.594598 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.594684 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.595001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:19 crc kubenswrapper[4760]: E0123 18:02:19.595112 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.632154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.632189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.632197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.632228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.632237 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.735176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.735225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.735234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.735248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.735261 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.838744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.838826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.838849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.838874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.838903 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.941023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.941061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.941069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.941083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:19 crc kubenswrapper[4760]: I0123 18:02:19.941094 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:19Z","lastTransitionTime":"2026-01-23T18:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.043495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.043549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.043562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.043578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.043590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.133733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.133791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.133801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.133817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.133827 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.146535 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.150151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.150189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.150202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.150220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.150232 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.162936 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.167450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.167499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.167541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.167559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.167577 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.181377 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.185100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.185144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.185156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.185175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.185188 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.200691 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.206280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.206365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.206380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.206426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.206443 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.222090 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:20Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.222300 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.224264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.224302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.224314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.224336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.224354 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.327899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.327943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.327953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.327968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.327982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.430165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.430209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.430222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.430240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.430252 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.532354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.532424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.532436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.532455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.532468 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.567208 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:16:24.173698385 +0000 UTC Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.594638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.594771 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.595248 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:20 crc kubenswrapper[4760]: E0123 18:02:20.595324 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.634150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.634188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.634220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.634236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.634245 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.737545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.737600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.737616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.737644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.737658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.840174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.840215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.840225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.840241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.840254 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.942475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.942535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.942546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.942559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:20 crc kubenswrapper[4760]: I0123 18:02:20.942567 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:20Z","lastTransitionTime":"2026-01-23T18:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.045195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.045241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.045249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.045264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.045273 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.148061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.148121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.148136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.148161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.148178 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.250770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.250833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.250853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.250883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.250906 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.352681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.352744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.352757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.352775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.352789 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.455100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.455135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.455149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.455165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.455177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.557358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.557437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.557450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.557468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.557481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.568723 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:16:12.46005993 +0000 UTC Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.594343 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.594439 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:21 crc kubenswrapper[4760]: E0123 18:02:21.594541 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:21 crc kubenswrapper[4760]: E0123 18:02:21.594622 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.660174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.660245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.660268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.660297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.660319 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.762463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.762526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.762542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.762565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.762582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.865380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.865482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.865493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.865511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.865522 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.968810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.968839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.968849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.968869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:21 crc kubenswrapper[4760]: I0123 18:02:21.968886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:21Z","lastTransitionTime":"2026-01-23T18:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.072487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.072572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.072595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.072628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.072651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.175251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.175296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.175306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.175321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.175333 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.278135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.278211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.278225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.278251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.278269 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.381808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.381856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.381868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.381886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.381900 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.484894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.484940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.484950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.484968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.484978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.569569 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:30:35.413736285 +0000 UTC Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.587280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.587324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.587339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.587358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.587374 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.594914 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.595045 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:22 crc kubenswrapper[4760]: E0123 18:02:22.595095 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:22 crc kubenswrapper[4760]: E0123 18:02:22.595220 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.689534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.689575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.689584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.689597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.689606 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.791960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.792065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.792079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.792097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.792114 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.894493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.894568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.894592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.894620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.894643 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.997355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.997429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.997443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.997458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:22 crc kubenswrapper[4760]: I0123 18:02:22.997467 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:22Z","lastTransitionTime":"2026-01-23T18:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.100324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.100401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.100459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.100488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.100509 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.203270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.203541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.203614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.203678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.203742 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.306106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.306148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.306160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.306177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.306188 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.409310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.409359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.409370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.409386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.409398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.511889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.511927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.511935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.511949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.511958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.569703 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:27:45.399825125 +0000 UTC Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.595160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.595219 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:23 crc kubenswrapper[4760]: E0123 18:02:23.595322 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:23 crc kubenswrapper[4760]: E0123 18:02:23.595445 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.614150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.614229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.614249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.614277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.614301 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.717382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.717441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.717457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.717470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.717480 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.820196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.820265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.820276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.820292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.820304 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.922448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.922504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.922515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.922534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:23 crc kubenswrapper[4760]: I0123 18:02:23.922547 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:23Z","lastTransitionTime":"2026-01-23T18:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.025900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.025945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.025954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.025969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.025982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.128154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.128201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.128214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.128229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.128240 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.230246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.230290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.230301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.230319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.230332 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.332609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.332680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.332696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.332723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.332759 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.435873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.435955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.435970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.435988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.436000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.539125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.539191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.539208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.539228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.539242 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.570044 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:21:25.127746837 +0000 UTC Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.594579 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.594757 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:24 crc kubenswrapper[4760]: E0123 18:02:24.594849 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:24 crc kubenswrapper[4760]: E0123 18:02:24.594981 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.641802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.641854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.641865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.641883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.641896 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.744833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.744884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.744892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.744907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.744918 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.848031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.848093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.848108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.848127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.848142 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.951543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.951601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.951612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.951632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:24 crc kubenswrapper[4760]: I0123 18:02:24.951645 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:24Z","lastTransitionTime":"2026-01-23T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.054437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.054470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.054480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.054492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.054502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.157515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.157563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.157574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.157589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.157601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.260907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.260986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.261016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.261036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.261046 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.362895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.362947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.362959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.362976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.362991 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.466076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.466161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.466184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.466218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.466241 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571077 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:03:36.870011832 +0000 UTC Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.571753 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.594559 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:25 crc kubenswrapper[4760]: E0123 18:02:25.594684 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.594576 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:25 crc kubenswrapper[4760]: E0123 18:02:25.594843 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.673960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.673989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.674000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.674014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.674023 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.776799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.776830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.776842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.776857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.776869 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.879210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.879265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.879284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.879308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.879324 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.982388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.982421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.982430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.982444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:25 crc kubenswrapper[4760]: I0123 18:02:25.982453 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:25Z","lastTransitionTime":"2026-01-23T18:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.085232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.085308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.085320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.085337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.085347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.187831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.187866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.187877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.187892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.187903 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.290090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.290150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.290167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.290189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.290207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.392085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.392135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.392155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.392174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.392185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.494964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.495031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.495066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.495103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.495128 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.571629 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:50:02.402100642 +0000 UTC Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.594251 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.594301 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:26 crc kubenswrapper[4760]: E0123 18:02:26.594395 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:26 crc kubenswrapper[4760]: E0123 18:02:26.594553 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.597115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.597147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.597158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.597173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.597188 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.699928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.699975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.699985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.700002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.700014 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.802195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.802244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.802256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.802270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.802280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.904491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.904603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.904617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.904642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:26 crc kubenswrapper[4760]: I0123 18:02:26.904658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:26Z","lastTransitionTime":"2026-01-23T18:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.007482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.007540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.007557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.007579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.007595 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.110615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.110667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.110680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.110698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.110708 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.213079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.213153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.213167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.213183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.213195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.315388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.315470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.315479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.315494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.315503 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.417391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.417448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.417458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.417473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.417484 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.520362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.520428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.520441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.520461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.520472 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.572835 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:11:17.423806036 +0000 UTC Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.594237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.594390 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:27 crc kubenswrapper[4760]: E0123 18:02:27.594610 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:27 crc kubenswrapper[4760]: E0123 18:02:27.594898 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.607484 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f596bc-e797-4212-a922-05d4f490ea0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34fdcb71cc1645d02a4a3eb4c244062e7defffc5fe2029c76bcfd46d69bb35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.622777 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.624752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.624796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.624815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.624849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.624871 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.637479 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.658363 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65302c8db7b711fc4cc14bfa9fb77580e94d9c0fa84c08bb0dad6bd4402879f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"message\\\":\\\"object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 18:01:46.322508 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322512 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322519 6455 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-vgrsn\\\\nI0123 18:01:46.322528 6455 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-7ck54\\\\nI0123 18:01:46.322358 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 18:01:46.322530 6455 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nF0123 18:01:46.322440 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:17Z\\\",\\\"message\\\":\\\"nsuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0123 18:02:16.531969 6863 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0123 18:02:16.531980 6863 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0123 18:02:16.532004 6863 default_network_controller.go:776] Recording success event on pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0123 18:02:16.531922 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.669344 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc 
kubenswrapper[4760]: I0123 18:02:27.684005 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df93
1af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.693963 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.703688 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.718762 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa
3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.726698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.726741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.726753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.726770 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.726781 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.729567 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.741608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.754465 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.768605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.783129 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.796533 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.818474 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d8
6f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d
2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.829138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.829194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.829204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.829225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.829238 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.836467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62
f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.848925 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.865169 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.932269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.932312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.932325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:27 crc 
kubenswrapper[4760]: I0123 18:02:27.932342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:27 crc kubenswrapper[4760]: I0123 18:02:27.932354 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:27Z","lastTransitionTime":"2026-01-23T18:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.035057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.035133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.035155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.035183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.035209 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.138014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.138094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.138110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.138133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.138150 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.240852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.240941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.240955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.240973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.240993 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.342932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.343007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.343039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.343056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.343065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.446046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.446098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.446106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.446120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.446130 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.549087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.549169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.549186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.549203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.549215 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.573019 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:38:19.361378132 +0000 UTC Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.594526 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.594526 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:28 crc kubenswrapper[4760]: E0123 18:02:28.594744 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:28 crc kubenswrapper[4760]: E0123 18:02:28.595120 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.595715 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:02:28 crc kubenswrapper[4760]: E0123 18:02:28.595915 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.610859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.625125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.638554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651846 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.651919 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.673122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.687574 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.698398 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.711830 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.722533 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc 
kubenswrapper[4760]: I0123 18:02:28.734875 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f596bc-e797-4212-a922-05d4f490ea0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34fdcb71cc1645d02a4a3eb4c244062e7defffc5fe2029c76bcfd46d69bb35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.750375 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.754462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.754516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.754530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.754551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.754566 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.763711 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.788796 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:17Z\\\",\\\"message\\\":\\\"nsuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0123 18:02:16.531969 6863 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0123 18:02:16.531980 6863 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0123 18:02:16.532004 6863 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0123 18:02:16.531922 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.804843 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.822932 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e94
48beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.836892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.849256 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.856492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.856560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.856572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.856613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.856626 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.864904 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.876058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:28Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.959446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.959493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.959510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.959526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:28 crc kubenswrapper[4760]: I0123 18:02:28.959538 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:28Z","lastTransitionTime":"2026-01-23T18:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.061912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.061973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.061985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.062000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.062012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.165219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.165258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.165271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.165286 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.165298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.267602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.267675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.267690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.267710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.267726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.370580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.370657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.370681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.370708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.370736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.473768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.473864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.473880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.473906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.473924 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.573964 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:53:48.720615096 +0000 UTC Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.576914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.576992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.577008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.577028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.577040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.594234 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:29 crc kubenswrapper[4760]: E0123 18:02:29.594374 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.594250 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:29 crc kubenswrapper[4760]: E0123 18:02:29.594531 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.679263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.679311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.679320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.679334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.679344 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.782390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.782449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.782461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.782479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.782488 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.885072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.885146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.885161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.885191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.885209 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.988093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.988132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.988142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.988156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:29 crc kubenswrapper[4760]: I0123 18:02:29.988167 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:29Z","lastTransitionTime":"2026-01-23T18:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.090263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.090315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.090328 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.090347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.090357 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.192926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.192967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.192976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.192993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.193003 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.296308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.296340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.296352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.296367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.296381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.399440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.399519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.399532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.399561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.399575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.502959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.503027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.503044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.503066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.503082 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.574996 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:18:20.628540422 +0000 UTC Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.594513 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.594623 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.594514 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.594952 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.605643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.605673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.605719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.605744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.605757 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.618080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.618107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.618115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.618125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.618133 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.635509 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.639668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.639716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.639749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.639764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.639773 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.651656 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node-status patch payload identical to the preceding attempt omitted ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.655114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.655145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.655153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.655167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.655177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.666580 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node-status patch payload identical to the preceding attempt omitted ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.669692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.669738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.669747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.669761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.669771 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.682844 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.686427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.686464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.686477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.686495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.686508 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.700282 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0551ff4f-58bc-46b1-acf7-c08b9bc381c4\\\",\\\"systemUUID\\\":\\\"9c1e1f52-8483-49f8-b4b1-1ac575f28e02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:30Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:30 crc kubenswrapper[4760]: E0123 18:02:30.700466 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.707720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.707751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.707759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.707771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.707779 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.810681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.810719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.810730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.810746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.810758 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.913252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.913295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.913304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.913316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:30 crc kubenswrapper[4760]: I0123 18:02:30.913326 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:30Z","lastTransitionTime":"2026-01-23T18:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.020247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.020300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.020314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.020334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.020349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.123790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.123859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.123871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.123892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.123909 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.226673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.226727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.226737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.227057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.227074 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.330279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.330377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.330392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.330448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.330469 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.432911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.432981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.432997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.433018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.433031 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.536248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.536324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.536340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.536360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.536377 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.575466 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:52:33.318432774 +0000 UTC Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.595049 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.595126 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:31 crc kubenswrapper[4760]: E0123 18:02:31.595193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:31 crc kubenswrapper[4760]: E0123 18:02:31.595288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.639014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.639057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.639066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.639079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.639088 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.742119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.742160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.742169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.742189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.742206 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.844098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.844143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.844153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.844166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.844176 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.946812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.946868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.946883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.946905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:31 crc kubenswrapper[4760]: I0123 18:02:31.946920 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:31Z","lastTransitionTime":"2026-01-23T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.049105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.049181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.049192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.049207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.049226 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.152129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.152165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.152174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.152188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.152196 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.254932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.254975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.254987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.255005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.255017 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.358086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.358123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.358132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.358144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.358155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.461442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.461491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.461503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.461522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.461534 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.564615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.564693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.564709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.564730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.564743 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.575860 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:37:38.069714444 +0000 UTC Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.594491 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.594529 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:32 crc kubenswrapper[4760]: E0123 18:02:32.594691 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:32 crc kubenswrapper[4760]: E0123 18:02:32.594924 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.667639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.667679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.667694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.667711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.667722 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.772277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.772310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.772320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.772334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.772344 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.874900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.874938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.874949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.874964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.874977 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.977121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.977171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.977182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.977200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:32 crc kubenswrapper[4760]: I0123 18:02:32.977212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:32Z","lastTransitionTime":"2026-01-23T18:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.079567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.079613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.079626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.079648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.079661 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.182088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.182129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.182141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.182160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.182172 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.284718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.284760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.284772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.284788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.284800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.387451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.387524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.387543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.387561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.387573 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.490374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.490477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.490495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.490520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.490536 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.576509 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:31:55.572553876 +0000 UTC Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.594215 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.594259 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:33 crc kubenswrapper[4760]: E0123 18:02:33.594317 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:33 crc kubenswrapper[4760]: E0123 18:02:33.594461 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.595443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.595502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.595525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.595555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.595576 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.698423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.698465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.698477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.698494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.698503 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.802477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.802544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.802557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.802577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.802589 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.905204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.905235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.905245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.905258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:33 crc kubenswrapper[4760]: I0123 18:02:33.905267 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:33Z","lastTransitionTime":"2026-01-23T18:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.007575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.007609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.007621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.007636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.007647 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.109615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.109661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.109675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.109693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.109705 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.211809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.211853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.211864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.211879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.211890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.313764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.313849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.313875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.313908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.313930 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.417821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.417922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.417943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.417994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.418012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.521537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.521572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.521580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.521593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.521603 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.577615 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:35:57.294612107 +0000 UTC Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.594944 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.594955 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:34 crc kubenswrapper[4760]: E0123 18:02:34.595282 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:34 crc kubenswrapper[4760]: E0123 18:02:34.595428 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.623356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.623383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.623392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.623418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.623427 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.727745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.727783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.727795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.727812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.727824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.771708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:34 crc kubenswrapper[4760]: E0123 18:02:34.771858 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:02:34 crc kubenswrapper[4760]: E0123 18:02:34.771943 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs podName:009bf3d0-1239-4b72-8a29-8b5e5964bdac nodeName:}" failed. No retries permitted until 2026-01-23 18:03:38.771920644 +0000 UTC m=+161.774378597 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs") pod "network-metrics-daemon-sw8p8" (UID: "009bf3d0-1239-4b72-8a29-8b5e5964bdac") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.829959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.830308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.830374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.830541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.830624 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.932968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.932997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.933005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.933018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:34 crc kubenswrapper[4760]: I0123 18:02:34.933028 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:34Z","lastTransitionTime":"2026-01-23T18:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.035842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.035888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.035902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.035918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.035931 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.138780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.138832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.138843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.138860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.138872 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.241572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.241617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.241629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.241645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.241658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.343756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.343990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.344120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.344218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.344299 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.447571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.448031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.448265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.448574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.448803 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.551809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.551900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.551950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.551975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.551993 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.578174 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:25:33.41957061 +0000 UTC Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.594642 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.595124 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:35 crc kubenswrapper[4760]: E0123 18:02:35.595188 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:35 crc kubenswrapper[4760]: E0123 18:02:35.596122 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.654903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.655207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.655355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.655394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.655430 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.757769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.757842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.757861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.757884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.757971 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.861033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.861084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.861094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.861110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.861122 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.964211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.964268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.964283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.964305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:35 crc kubenswrapper[4760]: I0123 18:02:35.964318 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:35Z","lastTransitionTime":"2026-01-23T18:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.066663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.066722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.066746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.066776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.066798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.169024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.169084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.169101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.169125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.169143 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.272812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.272851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.272889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.272910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.272922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.376283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.376326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.376338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.376354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.376366 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.478528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.478568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.478585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.478606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.478618 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.578556 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:55:42.624698073 +0000 UTC Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.581559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.581631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.581658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.581688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.581710 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.594437 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.594463 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:36 crc kubenswrapper[4760]: E0123 18:02:36.594576 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:36 crc kubenswrapper[4760]: E0123 18:02:36.595173 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.684587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.684649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.684661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.684678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.684691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.786990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.787031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.787042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.787061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.787072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.890178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.890242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.890264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.890295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.890317 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.993594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.993642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.993657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.993678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:36 crc kubenswrapper[4760]: I0123 18:02:36.993690 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:36Z","lastTransitionTime":"2026-01-23T18:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.096358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.096428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.096440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.096456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.096467 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.199470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.199548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.199576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.199607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.199634 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.302819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.302893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.302912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.302935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.302952 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.406568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.406605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.406620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.406633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.406642 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.508565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.508602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.508612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.508626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.508636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.578751 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:17:02.892423771 +0000 UTC Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.594658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.594699 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:37 crc kubenswrapper[4760]: E0123 18:02:37.594884 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:37 crc kubenswrapper[4760]: E0123 18:02:37.595034 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.611193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.611216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.611225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.611236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.611245 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.615014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4862a70-7160-4a61-a1d3-30e5698f2671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec220b0257a8735c4b7241934ca6baa6b914d7152f2bdf453830afd405397ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57870acd008b1f51741285fac995cbea1415912d85dfc07c1c5fa176a24f915f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20008fb4d61125bacd440fcccc412006d024f66954fec6536d110cc28a6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca17297ca2c544967a58a497c53ab53e2f607e7f1cf8e7f2fbb5dd2266bd338\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.633351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.653546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4248f1b0-4c19-49fb-b387-45b156eb3ce6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846214569f0617dcfed48aa9041a19dfa7af3299fac608fc4c355c94cbcda577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e10ba08d3467243dba15e5fb632f60e1d86f8f117de8efbc97f86836ef2e700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb1bbfa445451a46e9467440de969d27b77991d2bcf60c87546342167630e451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd1ed9f5577bf74aebec3acfaa8ee9d2b3be05eb1b71189e1a03f754513aa833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec0ecec870463b1ca4a55ee39516116865b18dcaf3472e7e10e1b8c89fc6641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40be2b4c12d2459e9ddb078d81e2925b8cbc47010830e11e524283d9c5a873da\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0cce7f4def43de4f75be1791e5a18e929f57202cba5ef86bdae7a5c281fa35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcf04ad1dd1518a03175f22eaa7c01f8d95622762c40d84626a0c0fcfa155dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.666867 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2945bd86-2fdb-4f91-9b5c-a2a1f65193e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9161116266c07a510b29aaa83da8528e035ad4b47eafefeff04b899c1240871d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3195c1a6079a2b7af2bd1853d436c5c3cb5702e6e50a5126d9854f7c2a399a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead5c641dea69a41f7c39ebb1ee9cd80b768b5810bc9ba278541557c1015f8e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.679658 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7f45fdc4e46bc32e8600b224628a31e8bf5743fb5d8d27cce987667db73e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.694905 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03a394da-f311-4268-9011-d781ba14cb3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:17Z\\\",\\\"message\\\":\\\"nsuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0123 18:02:16.531969 6863 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0123 18:02:16.531980 6863 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0123 18:02:16.532004 6863 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0123 18:02:16.531922 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:02:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6915e11bd400a0172d
34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-528cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-58zkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.705600 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"009bf3d0-1239-4b72-8a29-8b5e5964bdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qw95f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw8p8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.713662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.713872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.713994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.714148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.714314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.717596 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f596bc-e797-4212-a922-05d4f490ea0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34fdcb71cc1645d02a4a3eb4c244062e7defffc5fe2029c76bcfd46d69bb35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616ef6c428fe29187caae557f2c38644021a3b4abe827c7ff52bdea50884034b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.732033 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90406fa4-aa7b-4dcd-a275-255c5b4a38b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:01:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0123 18:01:15.264325 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:01:15.264475 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:01:15.312991 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2498406710/tls.crt::/tmp/serving-cert-2498406710/tls.key\\\\\\\"\\\\nI0123 18:01:15.739257 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:01:15.741737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:01:15.741755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:01:15.741781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:01:15.741787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:01:15.746260 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:01:15.746277 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:01:15.746277 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:01:15.746281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:01:15.746326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:01:15.746329 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:01:15.746332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:01:15.746335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:01:15.748085 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a119918241848f998a5ce2b0e3ef26f0601
7809d93d5845e992837481bd86367\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:00:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.742903 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6872f59896eb5a4f89136e4e31d416414c4091ce46950b60609a971696b51b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67be2bb0bb02aa3abe0d6c2c19a74ca1a8960b16cc9643cccec7d6792066d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.756208 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vgrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f722071-172e-4ab9-9cf5-67e13dfe9aea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4179eae788a96d457716bac68b6c0ebd026cfc2c527cb9db0e6747fb147f4040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-446tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vgrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.771870 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7ck54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac96490a-85b1-48f4-99d1-2b7505744007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:02:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:02:04Z\\\",\\\"message\\\":\\\"2026-01-23T18:01:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec\\\\n2026-01-23T18:01:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51693df6-6aae-4f4e-962b-a2398ca972ec to /host/opt/cni/bin/\\\\n2026-01-23T18:01:19Z [verbose] multus-daemon started\\\\n2026-01-23T18:01:19Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:02:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:02:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4b4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7ck54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.791809 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66s9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0f369c-16f4-4156-9b96-cef4c4fad7db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9448beda4ea5c3fe127d3ff6ee47f8ac8206a999ddc3dc461609086cc58cb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e98696d7f0de3ed1e62e04d2148a2ebc8dd49c9fa5cf5a9fb78a58bcca7365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535fe46d84f3c2545b47eab73df931af3d05312dfd612bc4b89e4e0fe0cfa67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53cc9f36a24dc75181c207383fb92359c5e048f04002c93184157ff293eb666\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8882
46efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b888246efbe6db35c0bbecf2c3fe61b023287148c5eb66a640d9e1318a7c52f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d9fae5deb79f5b82ac7cea77112eeb7610a252b1270b46ab3922e29caa1001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25280d27de3a9f9f1803ed3f061b53f9a7e497d9d8f50f2bcda77d90e0b56b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hjgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66s9m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.810095 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.817098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.817160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.817183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.817214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.817239 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.825549 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9173b186311029e2a6a185e7af7678b351ac9c301fb976d0bf0ffd27622dfae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.840868 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20652c61-310f-464d-ae66-dfc025a16b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a6a6cc87b9b7993bcfe450ebdf0d131a95ad52520fb7ccaed090aaff97c1a5\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2hj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xsk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.856069 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f1f91ba-af52-43da-9fe4-d146e9ccb228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24104ee7f509a42bdb1a3dbc53453609a57b9a1540600e85bc469f7fc7637239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff982a81dbb716d7c64121158990d57f01aab9fe789cb0901c49a2d76efc25fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g76sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f296v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.873593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.886644 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h6qwf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c0c68a-6978-4fa1-82c6-3fb16bcce76b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3631544fc3937af4c36aafd5719dc324791f75406053e9c57f0214915fe7a29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5f97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:01:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h6qwf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:02:37Z is after 2025-08-24T17:21:41Z" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.919828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.919874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.919892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.919913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:37 crc kubenswrapper[4760]: I0123 18:02:37.919930 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:37Z","lastTransitionTime":"2026-01-23T18:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.022028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.022063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.022072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.022087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.022098 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.124760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.124832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.124850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.124874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.124891 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.227553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.227592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.227603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.227617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.227629 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.330096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.330401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.330518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.330637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.330763 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.434097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.434162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.434179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.434768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.434847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.537359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.537451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.537470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.537494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.537513 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.579921 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:23:36.437155247 +0000 UTC Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.594248 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.594317 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:38 crc kubenswrapper[4760]: E0123 18:02:38.594506 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:38 crc kubenswrapper[4760]: E0123 18:02:38.594784 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.639748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.639788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.639806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.639825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.639839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.741630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.741662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.741674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.741690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.741701 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.844083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.844121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.844132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.844146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.844158 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.946871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.946971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.946998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.947023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:38 crc kubenswrapper[4760]: I0123 18:02:38.947040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:38Z","lastTransitionTime":"2026-01-23T18:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.049836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.049885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.049895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.049911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.049923 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.152803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.152872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.152889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.152912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.152927 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.255827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.255872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.255882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.255902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.255915 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.358819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.358854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.358862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.358875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.358884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.462155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.462213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.462230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.462253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.462275 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.566203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.566284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.566305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.566336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.566356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.580546 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:41:30.855661392 +0000 UTC Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.595261 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.595954 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:39 crc kubenswrapper[4760]: E0123 18:02:39.596189 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:39 crc kubenswrapper[4760]: E0123 18:02:39.596261 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.669081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.669283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.669308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.669331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.669351 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.772695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.772748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.772760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.772776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.772789 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.875987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.876051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.876060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.876074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.876085 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.978760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.978821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.978836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.978856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:39 crc kubenswrapper[4760]: I0123 18:02:39.978868 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:39Z","lastTransitionTime":"2026-01-23T18:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.081866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.081932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.081961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.081979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.081989 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.184705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.184745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.184755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.184769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.184778 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.287353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.287454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.287474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.287504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.287523 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.390234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.390291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.390303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.390321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.390334 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.493078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.493117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.493128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.493143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.493153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.580997 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:16:33.111271385 +0000 UTC Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.594503 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.594835 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:40 crc kubenswrapper[4760]: E0123 18:02:40.595027 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:40 crc kubenswrapper[4760]: E0123 18:02:40.595115 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.596039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.596077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.596089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.596107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.596120 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.698756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.698803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.698817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.698833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.698846 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.801077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.801144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.801165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.801211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.801238 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.905619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.905697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.905719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.905747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.905770 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.956710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.956759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.956781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.956799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:02:40 crc kubenswrapper[4760]: I0123 18:02:40.956814 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:02:40Z","lastTransitionTime":"2026-01-23T18:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.014104 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7"] Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.014558 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.017299 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.017355 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.017384 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.018180 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.041145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.041259 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7e84420-2686-4f52-a2d6-bcbc922db157-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.041331 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.041371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e84420-2686-4f52-a2d6-bcbc922db157-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.041460 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e84420-2686-4f52-a2d6-bcbc922db157-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.089895 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.08987639 podStartE2EDuration="24.08987639s" podCreationTimestamp="2026-01-23 18:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.089737766 +0000 UTC m=+104.092195699" watchObservedRunningTime="2026-01-23 18:02:41.08987639 +0000 UTC m=+104.092334323" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.103146 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.103128287 podStartE2EDuration="1m25.103128287s" 
podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.10252834 +0000 UTC m=+104.104986293" watchObservedRunningTime="2026-01-23 18:02:41.103128287 +0000 UTC m=+104.105586220" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.133612 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vgrsn" podStartSLOduration=85.133593742 podStartE2EDuration="1m25.133593742s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.13314991 +0000 UTC m=+104.135607853" watchObservedRunningTime="2026-01-23 18:02:41.133593742 +0000 UTC m=+104.136051675" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e84420-2686-4f52-a2d6-bcbc922db157-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e84420-2686-4f52-a2d6-bcbc922db157-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7e84420-2686-4f52-a2d6-bcbc922db157-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.142816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e7e84420-2686-4f52-a2d6-bcbc922db157-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc 
kubenswrapper[4760]: I0123 18:02:41.143736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7e84420-2686-4f52-a2d6-bcbc922db157-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.147795 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7ck54" podStartSLOduration=85.147773045 podStartE2EDuration="1m25.147773045s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.147356503 +0000 UTC m=+104.149814446" watchObservedRunningTime="2026-01-23 18:02:41.147773045 +0000 UTC m=+104.150230978" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.152105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e84420-2686-4f52-a2d6-bcbc922db157-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.162258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e84420-2686-4f52-a2d6-bcbc922db157-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nfzs7\" (UID: \"e7e84420-2686-4f52-a2d6-bcbc922db157\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.166597 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-66s9m" 
podStartSLOduration=85.16657595 podStartE2EDuration="1m25.16657595s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.16624858 +0000 UTC m=+104.168706553" watchObservedRunningTime="2026-01-23 18:02:41.16657595 +0000 UTC m=+104.169033893" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.198577 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podStartSLOduration=85.198558188 podStartE2EDuration="1m25.198558188s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.198053214 +0000 UTC m=+104.200511157" watchObservedRunningTime="2026-01-23 18:02:41.198558188 +0000 UTC m=+104.201016131" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.209378 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f296v" podStartSLOduration=84.209362835 podStartE2EDuration="1m24.209362835s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.209261792 +0000 UTC m=+104.211719745" watchObservedRunningTime="2026-01-23 18:02:41.209362835 +0000 UTC m=+104.211820768" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.248632 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-h6qwf" podStartSLOduration=85.24861511 podStartE2EDuration="1m25.24861511s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 18:02:41.232869052 +0000 UTC m=+104.235326995" watchObservedRunningTime="2026-01-23 18:02:41.24861511 +0000 UTC m=+104.251073043" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.249222 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.249216927 podStartE2EDuration="55.249216927s" podCreationTimestamp="2026-01-23 18:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.248096375 +0000 UTC m=+104.250554318" watchObservedRunningTime="2026-01-23 18:02:41.249216927 +0000 UTC m=+104.251674860" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.292206 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.292189408 podStartE2EDuration="1m26.292189408s" podCreationTimestamp="2026-01-23 18:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.291247711 +0000 UTC m=+104.293705644" watchObservedRunningTime="2026-01-23 18:02:41.292189408 +0000 UTC m=+104.294647341" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.308240 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.308224323 podStartE2EDuration="1m26.308224323s" podCreationTimestamp="2026-01-23 18:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:41.307529523 +0000 UTC m=+104.309987456" watchObservedRunningTime="2026-01-23 18:02:41.308224323 +0000 UTC m=+104.310682256" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.335715 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.581635 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:00:16.35850118 +0000 UTC Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.581700 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.589506 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.594853 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:41 crc kubenswrapper[4760]: I0123 18:02:41.594891 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:41 crc kubenswrapper[4760]: E0123 18:02:41.594986 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:41 crc kubenswrapper[4760]: E0123 18:02:41.595113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:42 crc kubenswrapper[4760]: I0123 18:02:42.133653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" event={"ID":"e7e84420-2686-4f52-a2d6-bcbc922db157","Type":"ContainerStarted","Data":"0c9f3b210d0b2c9e988964f6e1dacf5872c5aa043c3563132d98546ee0111c64"} Jan 23 18:02:42 crc kubenswrapper[4760]: I0123 18:02:42.133709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" event={"ID":"e7e84420-2686-4f52-a2d6-bcbc922db157","Type":"ContainerStarted","Data":"a424356c8af8a2ca08bf3e82d6e33d10cd382b7baaacc877a51f3d3a32b6f42f"} Jan 23 18:02:42 crc kubenswrapper[4760]: I0123 18:02:42.594459 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:42 crc kubenswrapper[4760]: I0123 18:02:42.594505 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:42 crc kubenswrapper[4760]: E0123 18:02:42.594941 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:42 crc kubenswrapper[4760]: E0123 18:02:42.595107 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:42 crc kubenswrapper[4760]: I0123 18:02:42.595278 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:02:42 crc kubenswrapper[4760]: E0123 18:02:42.595460 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:02:43 crc kubenswrapper[4760]: I0123 18:02:43.594288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:43 crc kubenswrapper[4760]: I0123 18:02:43.594468 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:43 crc kubenswrapper[4760]: E0123 18:02:43.594642 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:43 crc kubenswrapper[4760]: E0123 18:02:43.594955 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:44 crc kubenswrapper[4760]: I0123 18:02:44.594253 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:44 crc kubenswrapper[4760]: I0123 18:02:44.594874 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:44 crc kubenswrapper[4760]: E0123 18:02:44.594998 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:44 crc kubenswrapper[4760]: E0123 18:02:44.595074 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:45 crc kubenswrapper[4760]: I0123 18:02:45.595148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:45 crc kubenswrapper[4760]: I0123 18:02:45.595218 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:45 crc kubenswrapper[4760]: E0123 18:02:45.595281 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:45 crc kubenswrapper[4760]: E0123 18:02:45.595481 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:46 crc kubenswrapper[4760]: I0123 18:02:46.595012 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:46 crc kubenswrapper[4760]: I0123 18:02:46.595167 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:46 crc kubenswrapper[4760]: E0123 18:02:46.595566 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:46 crc kubenswrapper[4760]: E0123 18:02:46.595789 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:47 crc kubenswrapper[4760]: I0123 18:02:47.594556 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:47 crc kubenswrapper[4760]: E0123 18:02:47.595965 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:47 crc kubenswrapper[4760]: I0123 18:02:47.596075 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:47 crc kubenswrapper[4760]: E0123 18:02:47.596287 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:48 crc kubenswrapper[4760]: I0123 18:02:48.595168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:48 crc kubenswrapper[4760]: E0123 18:02:48.595641 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:48 crc kubenswrapper[4760]: I0123 18:02:48.595213 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:48 crc kubenswrapper[4760]: E0123 18:02:48.595876 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:49 crc kubenswrapper[4760]: I0123 18:02:49.595098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:49 crc kubenswrapper[4760]: I0123 18:02:49.595255 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:49 crc kubenswrapper[4760]: E0123 18:02:49.595362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:49 crc kubenswrapper[4760]: E0123 18:02:49.595537 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:50 crc kubenswrapper[4760]: I0123 18:02:50.594808 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:50 crc kubenswrapper[4760]: I0123 18:02:50.594832 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:50 crc kubenswrapper[4760]: E0123 18:02:50.594958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:50 crc kubenswrapper[4760]: E0123 18:02:50.595084 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:51 crc kubenswrapper[4760]: I0123 18:02:51.594558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:51 crc kubenswrapper[4760]: I0123 18:02:51.594634 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:51 crc kubenswrapper[4760]: E0123 18:02:51.594889 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:51 crc kubenswrapper[4760]: E0123 18:02:51.595184 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.168603 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/1.log" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.169569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/0.log" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.169657 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac96490a-85b1-48f4-99d1-2b7505744007" containerID="02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b" exitCode=1 Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.169714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerDied","Data":"02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b"} Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.169782 4760 scope.go:117] "RemoveContainer" containerID="02c87a22fd8ab48968939d4332d7dbd7cd07efacc5a97f18d8a04af60a11f216" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.170254 4760 scope.go:117] "RemoveContainer" containerID="02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b" Jan 23 18:02:52 crc kubenswrapper[4760]: E0123 18:02:52.170528 
4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7ck54_openshift-multus(ac96490a-85b1-48f4-99d1-2b7505744007)\"" pod="openshift-multus/multus-7ck54" podUID="ac96490a-85b1-48f4-99d1-2b7505744007" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.188802 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nfzs7" podStartSLOduration=96.188774118 podStartE2EDuration="1m36.188774118s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:02:42.147831504 +0000 UTC m=+105.150289437" watchObservedRunningTime="2026-01-23 18:02:52.188774118 +0000 UTC m=+115.191232091" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.594454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:52 crc kubenswrapper[4760]: I0123 18:02:52.594509 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:52 crc kubenswrapper[4760]: E0123 18:02:52.594586 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:52 crc kubenswrapper[4760]: E0123 18:02:52.594723 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:53 crc kubenswrapper[4760]: I0123 18:02:53.173163 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/1.log" Jan 23 18:02:53 crc kubenswrapper[4760]: I0123 18:02:53.594574 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:53 crc kubenswrapper[4760]: E0123 18:02:53.594741 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:53 crc kubenswrapper[4760]: I0123 18:02:53.594601 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:53 crc kubenswrapper[4760]: E0123 18:02:53.595120 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:54 crc kubenswrapper[4760]: I0123 18:02:54.595139 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:54 crc kubenswrapper[4760]: I0123 18:02:54.595147 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:54 crc kubenswrapper[4760]: E0123 18:02:54.595328 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:54 crc kubenswrapper[4760]: E0123 18:02:54.595541 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:55 crc kubenswrapper[4760]: I0123 18:02:55.594729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:55 crc kubenswrapper[4760]: E0123 18:02:55.594881 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:55 crc kubenswrapper[4760]: I0123 18:02:55.594759 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:55 crc kubenswrapper[4760]: E0123 18:02:55.595259 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:55 crc kubenswrapper[4760]: I0123 18:02:55.595605 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:02:55 crc kubenswrapper[4760]: E0123 18:02:55.595798 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-58zkr_openshift-ovn-kubernetes(03a394da-f311-4268-9011-d781ba14cb3f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" Jan 23 18:02:56 crc kubenswrapper[4760]: I0123 18:02:56.594539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:02:56 crc kubenswrapper[4760]: E0123 18:02:56.594669 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:02:56 crc kubenswrapper[4760]: I0123 18:02:56.594542 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:02:56 crc kubenswrapper[4760]: E0123 18:02:56.594835 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac" Jan 23 18:02:57 crc kubenswrapper[4760]: E0123 18:02:57.579015 4760 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 18:02:57 crc kubenswrapper[4760]: I0123 18:02:57.594505 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:02:57 crc kubenswrapper[4760]: I0123 18:02:57.594539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:02:57 crc kubenswrapper[4760]: E0123 18:02:57.596529 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:02:57 crc kubenswrapper[4760]: E0123 18:02:57.596619 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:02:57 crc kubenswrapper[4760]: E0123 18:02:57.649148 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 18:02:58 crc kubenswrapper[4760]: I0123 18:02:58.594309 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:02:58 crc kubenswrapper[4760]: I0123 18:02:58.594389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:02:58 crc kubenswrapper[4760]: E0123 18:02:58.594622 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:02:58 crc kubenswrapper[4760]: E0123 18:02:58.594464 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:02:59 crc kubenswrapper[4760]: I0123 18:02:59.594587 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:02:59 crc kubenswrapper[4760]: I0123 18:02:59.594635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:02:59 crc kubenswrapper[4760]: E0123 18:02:59.594729 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:02:59 crc kubenswrapper[4760]: E0123 18:02:59.594881 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:00 crc kubenswrapper[4760]: I0123 18:03:00.594586 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:00 crc kubenswrapper[4760]: I0123 18:03:00.594659 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:00 crc kubenswrapper[4760]: E0123 18:03:00.594807 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:00 crc kubenswrapper[4760]: E0123 18:03:00.594950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:01 crc kubenswrapper[4760]: I0123 18:03:01.594333 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:01 crc kubenswrapper[4760]: I0123 18:03:01.594333 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:01 crc kubenswrapper[4760]: E0123 18:03:01.594492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:01 crc kubenswrapper[4760]: E0123 18:03:01.594564 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:02 crc kubenswrapper[4760]: I0123 18:03:02.594595 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:02 crc kubenswrapper[4760]: I0123 18:03:02.594644 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:02 crc kubenswrapper[4760]: E0123 18:03:02.594778 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:02 crc kubenswrapper[4760]: E0123 18:03:02.594873 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:02 crc kubenswrapper[4760]: E0123 18:03:02.650634 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 18:03:03 crc kubenswrapper[4760]: I0123 18:03:03.595069 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:03 crc kubenswrapper[4760]: E0123 18:03:03.595266 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:03 crc kubenswrapper[4760]: I0123 18:03:03.595077 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:03 crc kubenswrapper[4760]: E0123 18:03:03.595709 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:04 crc kubenswrapper[4760]: I0123 18:03:04.594606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:04 crc kubenswrapper[4760]: I0123 18:03:04.594623 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:04 crc kubenswrapper[4760]: E0123 18:03:04.594873 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:04 crc kubenswrapper[4760]: E0123 18:03:04.594974 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:05 crc kubenswrapper[4760]: I0123 18:03:05.595115 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:05 crc kubenswrapper[4760]: I0123 18:03:05.595181 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:05 crc kubenswrapper[4760]: E0123 18:03:05.595279 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:05 crc kubenswrapper[4760]: E0123 18:03:05.595606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:06 crc kubenswrapper[4760]: I0123 18:03:06.594940 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:06 crc kubenswrapper[4760]: I0123 18:03:06.595057 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:06 crc kubenswrapper[4760]: E0123 18:03:06.595224 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:06 crc kubenswrapper[4760]: I0123 18:03:06.595346 4760 scope.go:117] "RemoveContainer" containerID="02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b"
Jan 23 18:03:06 crc kubenswrapper[4760]: E0123 18:03:06.595470 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:07 crc kubenswrapper[4760]: I0123 18:03:07.224211 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/1.log"
Jan 23 18:03:07 crc kubenswrapper[4760]: I0123 18:03:07.224558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerStarted","Data":"e1df9bdba069426e58b0479a2525b9c4dca95a968b7b411df11c56edc4c931cf"}
Jan 23 18:03:07 crc kubenswrapper[4760]: I0123 18:03:07.594247 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:07 crc kubenswrapper[4760]: I0123 18:03:07.594300 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:07 crc kubenswrapper[4760]: E0123 18:03:07.595455 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:07 crc kubenswrapper[4760]: E0123 18:03:07.595543 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:07 crc kubenswrapper[4760]: E0123 18:03:07.651463 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 18:03:08 crc kubenswrapper[4760]: I0123 18:03:08.594798 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:08 crc kubenswrapper[4760]: I0123 18:03:08.594861 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:08 crc kubenswrapper[4760]: E0123 18:03:08.594956 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:08 crc kubenswrapper[4760]: E0123 18:03:08.595077 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:09 crc kubenswrapper[4760]: I0123 18:03:09.594237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:09 crc kubenswrapper[4760]: I0123 18:03:09.594258 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:09 crc kubenswrapper[4760]: E0123 18:03:09.594382 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:09 crc kubenswrapper[4760]: E0123 18:03:09.594541 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:10 crc kubenswrapper[4760]: I0123 18:03:10.594725 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:10 crc kubenswrapper[4760]: I0123 18:03:10.594735 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:10 crc kubenswrapper[4760]: E0123 18:03:10.594855 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:10 crc kubenswrapper[4760]: E0123 18:03:10.595007 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:10 crc kubenswrapper[4760]: I0123 18:03:10.595565 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.237591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/3.log"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.240565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerStarted","Data":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"}
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.241022 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.269984 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podStartSLOduration=115.269968725 podStartE2EDuration="1m55.269968725s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:11.268998428 +0000 UTC m=+134.271456381" watchObservedRunningTime="2026-01-23 18:03:11.269968725 +0000 UTC m=+134.272426658"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.353859 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sw8p8"]
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.353957 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:11 crc kubenswrapper[4760]: E0123 18:03:11.354041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.597237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:11 crc kubenswrapper[4760]: E0123 18:03:11.597777 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:11 crc kubenswrapper[4760]: I0123 18:03:11.600350 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:11 crc kubenswrapper[4760]: E0123 18:03:11.600591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:12 crc kubenswrapper[4760]: I0123 18:03:12.595101 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:12 crc kubenswrapper[4760]: E0123 18:03:12.595217 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:12 crc kubenswrapper[4760]: E0123 18:03:12.652876 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 18:03:13 crc kubenswrapper[4760]: I0123 18:03:13.594666 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:13 crc kubenswrapper[4760]: I0123 18:03:13.594740 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:13 crc kubenswrapper[4760]: E0123 18:03:13.594852 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:13 crc kubenswrapper[4760]: I0123 18:03:13.594885 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:13 crc kubenswrapper[4760]: E0123 18:03:13.594988 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:13 crc kubenswrapper[4760]: E0123 18:03:13.595099 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:14 crc kubenswrapper[4760]: I0123 18:03:14.594893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:14 crc kubenswrapper[4760]: E0123 18:03:14.595101 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:15 crc kubenswrapper[4760]: I0123 18:03:15.594871 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:15 crc kubenswrapper[4760]: I0123 18:03:15.594944 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:15 crc kubenswrapper[4760]: I0123 18:03:15.594899 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:15 crc kubenswrapper[4760]: E0123 18:03:15.595124 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:15 crc kubenswrapper[4760]: E0123 18:03:15.595224 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:15 crc kubenswrapper[4760]: E0123 18:03:15.595341 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:16 crc kubenswrapper[4760]: I0123 18:03:16.594923 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:16 crc kubenswrapper[4760]: E0123 18:03:16.595337 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 18:03:17 crc kubenswrapper[4760]: I0123 18:03:17.596126 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:17 crc kubenswrapper[4760]: E0123 18:03:17.596230 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 18:03:17 crc kubenswrapper[4760]: I0123 18:03:17.596431 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:17 crc kubenswrapper[4760]: E0123 18:03:17.596474 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 18:03:17 crc kubenswrapper[4760]: I0123 18:03:17.596605 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:17 crc kubenswrapper[4760]: E0123 18:03:17.596678 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw8p8" podUID="009bf3d0-1239-4b72-8a29-8b5e5964bdac"
Jan 23 18:03:18 crc kubenswrapper[4760]: I0123 18:03:18.333085 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr"
Jan 23 18:03:18 crc kubenswrapper[4760]: I0123 18:03:18.595005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 18:03:18 crc kubenswrapper[4760]: I0123 18:03:18.597563 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 18:03:18 crc kubenswrapper[4760]: I0123 18:03:18.598739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.594666 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.594709 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.594744 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.599484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.599772 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.599787 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 18:03:19 crc kubenswrapper[4760]: I0123 18:03:19.600759 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.933570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.973212 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qkh24"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.973870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.974660 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.975274 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.975487 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lqr88"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.975872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.976449 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-l7lg7"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.977229 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.977527 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.977554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.977800 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.978124 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.979211 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.979450 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.979582 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.979663 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.979922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.980926 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.981095 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.981192 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wv6zt"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.981349 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.981678 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wv6zt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.982560 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.982916 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983023 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983150 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983293 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983386 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983848 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.983911 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw"]
Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.984242 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.984318 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"] Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.986617 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.986796 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.987023 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.987182 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.987574 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991105 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq"] Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991329 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991532 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991563 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991679 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.991829 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 18:03:21 crc kubenswrapper[4760]: I0123 18:03:21.993574 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.001495 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.001612 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.001810 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.001967 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.002087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:03:22 
crc kubenswrapper[4760]: I0123 18:03:22.017463 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.043795 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.043993 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.044086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.044174 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.044251 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.044701 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.045177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.047567 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.047662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.047807 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.047924 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.048997 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.049125 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.049249 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.050606 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-h45nc"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.051094 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.052337 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055598 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055695 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055717 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055855 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.055978 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.056062 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.056245 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.056556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.056970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.061073 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.061459 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.062281 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.062891 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sd99g"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.063265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.063297 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.064523 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.065459 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.065585 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.067766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.067773 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.074752 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.075256 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.075698 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.076128 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.076448 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.083240 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.083749 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.085252 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.085427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.085434 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.085569 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.085606 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.088634 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.088755 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.088813 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.088843 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s4mrj"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089314 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089338 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089658 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-etcd-serving-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089677 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a9e97eeb-2c62-421e-b81a-875190083260-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-audit-dir\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtddx\" (UniqueName: \"kubernetes.io/projected/a9e97eeb-2c62-421e-b81a-875190083260-kube-api-access-gtddx\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtbg\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-kube-api-access-5rtbg\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc 
kubenswrapper[4760]: I0123 18:03:22.089797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9e97eeb-2c62-421e-b81a-875190083260-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089864 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvb7\" (UniqueName: \"kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089923 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-config\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e359c6a3-6164-43d1-823e-829223bb3605-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089963 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqxq\" (UniqueName: \"kubernetes.io/projected/b10c111b-45c0-4b24-9644-39c0ce99342d-kube-api-access-xwqxq\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089991 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-config\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dba0724-514b-42cb-a0e5-9fc8caa3844f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090064 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-config\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-client\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-encryption-config\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-trusted-ca\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " 
pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-policies\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvbq\" (UniqueName: \"kubernetes.io/projected/1dba0724-514b-42cb-a0e5-9fc8caa3844f-kube-api-access-9mvbq\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tltvv\" (UniqueName: \"kubernetes.io/projected/c08fd2c2-1700-4296-a369-62c3c9928a63-kube-api-access-tltvv\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090265 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-metrics-tls\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090299 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-serving-cert\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090333 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: 
\"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4648a70d-5af3-459c-815f-d12089d27b88-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0dadab3-41d3-4f5e-bd3f-7860880717b1-serving-cert\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e359c6a3-6164-43d1-823e-829223bb3605-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.089873 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhlc"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-image-registry-operator-tls\") 
pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090571 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-images\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090576 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hb5\" (UniqueName: \"kubernetes.io/projected/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-kube-api-access-82hb5\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-node-pullsecrets\") pod \"apiserver-76f77b778f-l7lg7\" (UID: 
\"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhj5\" (UniqueName: \"kubernetes.io/projected/f0dadab3-41d3-4f5e-bd3f-7860880717b1-kube-api-access-qdhj5\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090658 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8fc\" (UniqueName: \"kubernetes.io/projected/c27b9e2a-0800-4159-9c97-7e46c2e546c1-kube-api-access-8c8fc\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090673 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsnb\" (UniqueName: \"kubernetes.io/projected/61a396e8-372f-4982-8994-d60baa42da95-kube-api-access-8qsnb\") pod \"downloads-7954f5f757-h45nc\" (UID: \"61a396e8-372f-4982-8994-d60baa42da95\") " pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-encryption-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c27b9e2a-0800-4159-9c97-7e46c2e546c1-machine-approver-tls\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090733 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dba0724-514b-42cb-a0e5-9fc8caa3844f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090751 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llw8m\" (UniqueName: \"kubernetes.io/projected/faa0722c-1acf-445e-8785-a8030be562b6-kube-api-access-llw8m\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090769 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-config\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090784 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-serving-cert\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-dir\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090813 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c08fd2c2-1700-4296-a369-62c3c9928a63-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090827 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-audit\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090843 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-etcd-client\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.090973 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091226 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-auth-proxy-config\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091289 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87h7k\" (UniqueName: 
\"kubernetes.io/projected/4648a70d-5af3-459c-815f-d12089d27b88-kube-api-access-87h7k\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091295 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091303 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdwt\" (UniqueName: \"kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091357 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-image-import-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-serving-cert\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091522 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091545 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgnj\" (UniqueName: 
\"kubernetes.io/projected/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-kube-api-access-8zgnj\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091565 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e359c6a3-6164-43d1-823e-829223bb3605-config\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.091859 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.092137 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.092158 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.092336 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.092441 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.096551 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.097365 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.111996 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wl57f"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.113652 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.114037 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.115179 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.115741 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.115976 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116125 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116418 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116564 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116710 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.116925 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.117055 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.117187 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.117974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.118174 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.118368 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.118524 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.119672 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.136346 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qd549"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.136994 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.139851 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.140053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.140176 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.140272 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.140371 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.140796 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141438 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141439 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141719 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.141957 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142013 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142165 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142438 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142572 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142661 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142735 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.143059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.143145 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 
18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.143221 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.142445 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.152478 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h2x2h"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.153189 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.155702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.156525 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.156890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.157522 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.158327 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.161453 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.167769 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.169325 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.169924 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.170548 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.171236 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.171825 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qkh24"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.181876 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.182119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.183340 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lqr88"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.184337 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.188565 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wv6zt"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.189813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.191136 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-l7lg7"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.191647 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192507 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdwt\" (UniqueName: 
\"kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192566 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-image-import-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e359c6a3-6164-43d1-823e-829223bb3605-config\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-serving-cert\") pod 
\"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192887 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgnj\" (UniqueName: \"kubernetes.io/projected/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-kube-api-access-8zgnj\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-etcd-serving-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a9e97eeb-2c62-421e-b81a-875190083260-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pxxf4\" (UniqueName: \"kubernetes.io/projected/ff160511-4992-4cb9-b103-645a1dd82f55-kube-api-access-pxxf4\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.192991 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-audit-dir\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193013 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtddx\" (UniqueName: \"kubernetes.io/projected/a9e97eeb-2c62-421e-b81a-875190083260-kube-api-access-gtddx\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rtbg\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-kube-api-access-5rtbg\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9e97eeb-2c62-421e-b81a-875190083260-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvb7\" (UniqueName: \"kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-config\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e359c6a3-6164-43d1-823e-829223bb3605-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqxq\" (UniqueName: \"kubernetes.io/projected/b10c111b-45c0-4b24-9644-39c0ce99342d-kube-api-access-xwqxq\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193192 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a30bb8e-9df6-4c48-8532-ad9280521fb3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-config\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " 
pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193230 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dba0724-514b-42cb-a0e5-9fc8caa3844f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193273 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"] Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.193311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-audit-dir\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194017 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-etcd-serving-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194059 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-config\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.194865 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195264 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a9e97eeb-2c62-421e-b81a-875190083260-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-config\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195370 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-config\") 
pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e359c6a3-6164-43d1-823e-829223bb3605-config\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.195936 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-image-import-ca\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-trusted-ca\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-config\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-client\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196099 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196206 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-encryption-config\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.196905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-trusted-ca\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.197776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: 
\"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203013 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-policies\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvbq\" (UniqueName: \"kubernetes.io/projected/1dba0724-514b-42cb-a0e5-9fc8caa3844f-kube-api-access-9mvbq\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tltvv\" (UniqueName: \"kubernetes.io/projected/c08fd2c2-1700-4296-a369-62c3c9928a63-kube-api-access-tltvv\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-srv-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-metrics-tls\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-serving-cert\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: 
I0123 18:03:22.203212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203252 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4648a70d-5af3-459c-815f-d12089d27b88-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0dadab3-41d3-4f5e-bd3f-7860880717b1-serving-cert\") pod 
\"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e359c6a3-6164-43d1-823e-829223bb3605-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203334 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-profile-collector-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-images\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203401 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hb5\" (UniqueName: \"kubernetes.io/projected/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-kube-api-access-82hb5\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203436 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203451 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-node-pullsecrets\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdhj5\" (UniqueName: \"kubernetes.io/projected/f0dadab3-41d3-4f5e-bd3f-7860880717b1-kube-api-access-qdhj5\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9j2\" (UniqueName: \"kubernetes.io/projected/8a30bb8e-9df6-4c48-8532-ad9280521fb3-kube-api-access-rc9j2\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203506 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-encryption-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: 
\"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c8fc\" (UniqueName: \"kubernetes.io/projected/c27b9e2a-0800-4159-9c97-7e46c2e546c1-kube-api-access-8c8fc\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203541 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsnb\" (UniqueName: \"kubernetes.io/projected/61a396e8-372f-4982-8994-d60baa42da95-kube-api-access-8qsnb\") pod \"downloads-7954f5f757-h45nc\" (UID: \"61a396e8-372f-4982-8994-d60baa42da95\") " pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1dba0724-514b-42cb-a0e5-9fc8caa3844f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203611 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c27b9e2a-0800-4159-9c97-7e46c2e546c1-machine-approver-tls\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bjt\" (UniqueName: \"kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203648 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llw8m\" (UniqueName: \"kubernetes.io/projected/faa0722c-1acf-445e-8785-a8030be562b6-kube-api-access-llw8m\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-config\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" 
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-serving-cert\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203714 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-dir\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c08fd2c2-1700-4296-a369-62c3c9928a63-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-audit\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-etcd-client\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:22 
crc kubenswrapper[4760]: I0123 18:03:22.203792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-auth-proxy-config\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87h7k\" (UniqueName: \"kubernetes.io/projected/4648a70d-5af3-459c-815f-d12089d27b88-kube-api-access-87h7k\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203848 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.204246 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c08fd2c2-1700-4296-a369-62c3c9928a63-images\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.204954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e359c6a3-6164-43d1-823e-829223bb3605-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.204985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.205105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.205459 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h45nc"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.205501 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.206066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.206386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/faa0722c-1acf-445e-8785-a8030be562b6-node-pullsecrets\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.206637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-policies\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.206672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-etcd-client\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207122 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-serving-cert\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dadab3-41d3-4f5e-bd3f-7860880717b1-config\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207184 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kt5pm"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207573 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-serving-cert\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.207914 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kt5pm"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dba0724-514b-42cb-a0e5-9fc8caa3844f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208630 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c27b9e2a-0800-4159-9c97-7e46c2e546c1-auth-proxy-config\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208765 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/faa0722c-1acf-445e-8785-a8030be562b6-audit\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9e97eeb-2c62-421e-b81a-875190083260-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: \"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.208829 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b10c111b-45c0-4b24-9644-39c0ce99342d-audit-dir\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.203660 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dba0724-514b-42cb-a0e5-9fc8caa3844f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.209094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-encryption-config\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.209385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.210183 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-526r6"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.210825 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-526r6"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.210913 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-metrics-tls\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.211107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-encryption-config\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.212884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.213366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0dadab3-41d3-4f5e-bd3f-7860880717b1-serving-cert\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.215685 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.215865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.216659 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.222045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c27b9e2a-0800-4159-9c97-7e46c2e546c1-machine-approver-tls\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.222197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4648a70d-5af3-459c-815f-d12089d27b88-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.224069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c08fd2c2-1700-4296-a369-62c3c9928a63-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.224293 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s4mrj"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.224844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10c111b-45c0-4b24-9644-39c0ce99342d-serving-cert\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.231822 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.231881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.231897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.234310 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.234822 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/faa0722c-1acf-445e-8785-a8030be562b6-etcd-client\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.235752 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.236292 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.237242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.239636 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.240717 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.244083 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b9mm2"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.246770 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6srzg"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247010 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247272 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247297 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247308 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhlc"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247319 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247394 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6srzg"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.247879 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.248899 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.249907 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.250873 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sd99g"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.251934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wl57f"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.252808 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.253830 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h2x2h"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.254915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.255535 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.255869 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-526r6"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.256868 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b9mm2"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.257854 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.258838 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.259809 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.260770 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6srzg"]
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.275717 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.295713 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxxf4\" (UniqueName: \"kubernetes.io/projected/ff160511-4992-4cb9-b103-645a1dd82f55-kube-api-access-pxxf4\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304528 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a30bb8e-9df6-4c48-8532-ad9280521fb3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-srv-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-profile-collector-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9j2\" (UniqueName: \"kubernetes.io/projected/8a30bb8e-9df6-4c48-8532-ad9280521fb3-kube-api-access-rc9j2\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.304827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bjt\" (UniqueName: \"kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.306180 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.307999 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a30bb8e-9df6-4c48-8532-ad9280521fb3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.308212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.317479 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.335717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.366250 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.382208 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.397057 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.416568 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.435904 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.455288 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.476839 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.496264 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.527435 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.535715 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.555746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.576756 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.595750 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.617487 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.637497 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.656107 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.677375 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.697722 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.716897 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.737094 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.755948 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.776163 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.796019 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.816310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.836600 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.856617 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.876396 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.896203 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.916905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.936861 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.957310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.976238 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 18:03:22 crc kubenswrapper[4760]: I0123 18:03:22.997717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.016533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.029292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-srv-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.037064 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.057290 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.069976 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff160511-4992-4cb9-b103-645a1dd82f55-profile-collector-cert\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.076606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.096873 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.116014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.134885 4760 request.go:700] Waited for 1.019492711s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.136591 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.156341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.196928 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.216938 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.237148 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.255903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.277304 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.296424 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.318187 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.342351 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.356465 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.376538 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.396129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.416529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.417817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:23 crc kubenswrapper[4760]: E0123 18:03:23.418019 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:05:25.417993876 +0000 UTC m=+268.420451979 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.418125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.418165 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 
18:03:23.418224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.418252 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.419660 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.421951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.422988 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.423293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.436988 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.456344 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.477492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.496730 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.515142 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.517355 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.529638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.557814 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.577844 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.596572 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.617128 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.637181 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.658451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.676455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.699310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.713436 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.736671 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdwt\" (UniqueName: \"kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt\") pod \"controller-manager-879f6c89f-rv79c\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:23 crc kubenswrapper[4760]: W0123 18:03:23.742129 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5ec8057ab675126a2ea43f259994efcf216eaa8746e3321ab995a9bff0e2786b WatchSource:0}: Error finding container 5ec8057ab675126a2ea43f259994efcf216eaa8746e3321ab995a9bff0e2786b: Status 404 returned error can't find the container with id 5ec8057ab675126a2ea43f259994efcf216eaa8746e3321ab995a9bff0e2786b Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.756862 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.759665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvb7\" (UniqueName: \"kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7\") pod \"route-controller-manager-6576b87f9c-dxppx\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:23 crc kubenswrapper[4760]: W0123 18:03:23.765402 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-306361f6e2eaca6681def238c8f629eb9c349d9f2efe374a066f34920ef89c5e WatchSource:0}: Error finding container 306361f6e2eaca6681def238c8f629eb9c349d9f2efe374a066f34920ef89c5e: Status 404 returned error can't find the container with id 306361f6e2eaca6681def238c8f629eb9c349d9f2efe374a066f34920ef89c5e Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.775722 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.796369 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.831183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgnj\" (UniqueName: \"kubernetes.io/projected/c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916-kube-api-access-8zgnj\") pod \"console-operator-58897d9998-wv6zt\" (UID: \"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916\") " pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.848472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwqxq\" (UniqueName: \"kubernetes.io/projected/b10c111b-45c0-4b24-9644-39c0ce99342d-kube-api-access-xwqxq\") pod \"apiserver-7bbb656c7d-chh57\" (UID: \"b10c111b-45c0-4b24-9644-39c0ce99342d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.873132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtddx\" (UniqueName: \"kubernetes.io/projected/a9e97eeb-2c62-421e-b81a-875190083260-kube-api-access-gtddx\") pod \"openshift-config-operator-7777fb866f-j8t4w\" (UID: 
\"a9e97eeb-2c62-421e-b81a-875190083260\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:23 crc kubenswrapper[4760]: W0123 18:03:23.878056 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d6659fea520821481030961d971027399a517c1c0e54cc7f9851860119353add WatchSource:0}: Error finding container d6659fea520821481030961d971027399a517c1c0e54cc7f9851860119353add: Status 404 returned error can't find the container with id d6659fea520821481030961d971027399a517c1c0e54cc7f9851860119353add Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.889449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rtbg\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-kube-api-access-5rtbg\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.914237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hb5\" (UniqueName: \"kubernetes.io/projected/678b518e-f9ae-4fd0-bc08-b0489bf0aa07-kube-api-access-82hb5\") pod \"dns-operator-744455d44c-lqr88\" (UID: \"678b518e-f9ae-4fd0-bc08-b0489bf0aa07\") " pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.916111 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.929755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tltvv\" (UniqueName: \"kubernetes.io/projected/c08fd2c2-1700-4296-a369-62c3c9928a63-kube-api-access-tltvv\") pod \"machine-api-operator-5694c8668f-qkh24\" (UID: \"c08fd2c2-1700-4296-a369-62c3c9928a63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.946772 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.951127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvbq\" (UniqueName: \"kubernetes.io/projected/1dba0724-514b-42cb-a0e5-9fc8caa3844f-kube-api-access-9mvbq\") pod \"openshift-controller-manager-operator-756b6f6bc6-zjqml\" (UID: \"1dba0724-514b-42cb-a0e5-9fc8caa3844f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.980696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdhj5\" (UniqueName: \"kubernetes.io/projected/f0dadab3-41d3-4f5e-bd3f-7860880717b1-kube-api-access-qdhj5\") pod \"authentication-operator-69f744f599-x2z6h\" (UID: \"f0dadab3-41d3-4f5e-bd3f-7860880717b1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.982566 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:03:23 crc kubenswrapper[4760]: I0123 18:03:23.999117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c8fc\" (UniqueName: \"kubernetes.io/projected/c27b9e2a-0800-4159-9c97-7e46c2e546c1-kube-api-access-8c8fc\") pod \"machine-approver-56656f9798-l7btq\" (UID: \"c27b9e2a-0800-4159-9c97-7e46c2e546c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.012995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsnb\" (UniqueName: \"kubernetes.io/projected/61a396e8-372f-4982-8994-d60baa42da95-kube-api-access-8qsnb\") pod \"downloads-7954f5f757-h45nc\" (UID: \"61a396e8-372f-4982-8994-d60baa42da95\") " pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.032076 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llw8m\" (UniqueName: \"kubernetes.io/projected/faa0722c-1acf-445e-8785-a8030be562b6-kube-api-access-llw8m\") pod \"apiserver-76f77b778f-l7lg7\" (UID: \"faa0722c-1acf-445e-8785-a8030be562b6\") " pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.036484 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.050614 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.057819 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.058067 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.065254 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.067934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.073732 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.080461 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.092062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e359c6a3-6164-43d1-823e-829223bb3605-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsqmf\" (UID: \"e359c6a3-6164-43d1-823e-829223bb3605\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.096753 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.098225 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.122776 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wv6zt"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.137479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5ec118f-1c98-4ea4-870a-7ced4b2303e5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hsfsw\" (UID: \"c5ec118f-1c98-4ea4-870a-7ced4b2303e5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.153595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.154683 4760 request.go:700] Waited for 1.943646771s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.155818 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.156694 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.158015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87h7k\" (UniqueName: \"kubernetes.io/projected/4648a70d-5af3-459c-815f-d12089d27b88-kube-api-access-87h7k\") pod \"cluster-samples-operator-665b6dd947-w9575\" (UID: \"4648a70d-5af3-459c-815f-d12089d27b88\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.176636 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.178837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.188762 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.196322 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.215301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.238491 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.258030 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.275920 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.278090 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.293233 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.296025 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.316427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.328940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5874a1dfde6b120f832faddfb4b6f8a8a2b8e84104357dcf60b536d8bf8d6c3e"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.328983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5ec8057ab675126a2ea43f259994efcf216eaa8746e3321ab995a9bff0e2786b"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.329482 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.336741 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.347522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" event={"ID":"04f83f2f-9119-42d7-b712-06dc0ef0adfd","Type":"ContainerStarted","Data":"a01315f657219758f634d84af76766c82169c303598865db8a89d76fa9fab531"} Jan 23 18:03:24 crc kubenswrapper[4760]: W0123 18:03:24.348397 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc27b9e2a_0800_4159_9c97_7e46c2e546c1.slice/crio-ad35214b65eccf9d11d7f28df9f475e9a750792590544a2234508be3c4e19dd6 WatchSource:0}: Error finding container ad35214b65eccf9d11d7f28df9f475e9a750792590544a2234508be3c4e19dd6: Status 404 returned error can't find the container with id ad35214b65eccf9d11d7f28df9f475e9a750792590544a2234508be3c4e19dd6 Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.361448 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" event={"ID":"e56ff0b4-3551-4388-af57-ed219cde17de","Type":"ContainerStarted","Data":"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.361493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" event={"ID":"e56ff0b4-3551-4388-af57-ed219cde17de","Type":"ContainerStarted","Data":"028e37a2c6ef018d2a7b33375a6bf274d64925e4254baefc76e6ae5a177b0696"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.362074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.370050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0ca683558dd739a8699e75b669f0ec403020836f605eb219e24febf06fd19df8"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.370095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d6659fea520821481030961d971027399a517c1c0e54cc7f9851860119353add"} Jan 23 18:03:24 
crc kubenswrapper[4760]: I0123 18:03:24.376853 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9j2\" (UniqueName: \"kubernetes.io/projected/8a30bb8e-9df6-4c48-8532-ad9280521fb3-kube-api-access-rc9j2\") pod \"package-server-manager-789f6589d5-jbmsp\" (UID: \"8a30bb8e-9df6-4c48-8532-ad9280521fb3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.379678 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rv79c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.379721 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" podUID="e56ff0b4-3551-4388-af57-ed219cde17de" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.381839 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2f5bc9b0f70bd10f04c8fc70aff8d9ccca62cb026c12a9bd8d8172988c20cd8f"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.381869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"306361f6e2eaca6681def238c8f629eb9c349d9f2efe374a066f34920ef89c5e"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.387747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-wv6zt" event={"ID":"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916","Type":"ContainerStarted","Data":"61ff654ce0f11ee1c0e977881afa4a59c0d35ac5dd7fb07654a9808768a7f254"} Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.387904 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.394552 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.394702 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.399195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bjt\" (UniqueName: \"kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt\") pod \"marketplace-operator-79b997595-49s9b\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.416056 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxxf4\" (UniqueName: \"kubernetes.io/projected/ff160511-4992-4cb9-b103-645a1dd82f55-kube-api-access-pxxf4\") pod \"catalog-operator-68c6474976-mdqhm\" (UID: \"ff160511-4992-4cb9-b103-645a1dd82f55\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.427870 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.444301 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08dd246-4139-4911-a4f8-4a5e693f12de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-etcd-client\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448508 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08dd246-4139-4911-a4f8-4a5e693f12de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: 
\"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448581 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-trusted-ca\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-apiservice-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448610 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8vn\" (UniqueName: \"kubernetes.io/projected/987bbc21-84aa-4e45-bb94-b0639da3c5c8-kube-api-access-gl8vn\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9612c6f-0d03-4f5c-99d9-1e6681cea174-proxy-tls\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e3c99-f28e-446a-af8d-d6f6225fadba-config\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448672 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f32934d-e69b-4b98-b6db-447125e38ae0-config\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448689 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s4mrj\" 
(UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448711 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448762 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/987bbc21-84aa-4e45-bb94-b0639da3c5c8-service-ca-bundle\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqw2\" (UniqueName: 
\"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-kube-api-access-thqw2\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fm8\" (UniqueName: \"kubernetes.io/projected/2955aa37-6387-4353-ba15-ee92c902d318-kube-api-access-k6fm8\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5068e6c-7f43-4692-b8df-968ca0907a62-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448835 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f32934d-e69b-4b98-b6db-447125e38ae0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.448885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-srv-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449631 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-serving-cert\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c78fffb-6965-4ef7-b534-d713a7dd3318-proxy-tls\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-default-certificate\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " 
pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449734 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-webhook-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449765 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qfs\" (UniqueName: \"kubernetes.io/projected/f86c7176-88c1-4cd6-ab81-af7df8e9923f-kube-api-access-m7qfs\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449782 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449835 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwqcf\" (UniqueName: \"kubernetes.io/projected/33251e93-ea7b-4889-9822-f149f0331138-kube-api-access-kwqcf\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/82314a42-a08b-4561-b24a-71e715d5d37f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: 
\"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmclt\" (UniqueName: \"kubernetes.io/projected/f08dd246-4139-4911-a4f8-4a5e693f12de-kube-api-access-vmclt\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449967 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.449991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-cabundle\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450014 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-key\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450028 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-images\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c78fffb-6965-4ef7-b534-d713a7dd3318-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450098 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjs4r\" (UniqueName: \"kubernetes.io/projected/729e3c99-f28e-446a-af8d-d6f6225fadba-kube-api-access-tjs4r\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450126 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450157 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-stats-auth\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc 
kubenswrapper[4760]: I0123 18:03:24.450235 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjc7\" (UniqueName: \"kubernetes.io/projected/82314a42-a08b-4561-b24a-71e715d5d37f-kube-api-access-4kjc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jmq\" (UniqueName: \"kubernetes.io/projected/7f3d065c-0f4b-418d-8f05-a147000a9113-kube-api-access-45jmq\") pod \"migrator-59844c95c7-zbz2j\" (UID: \"7f3d065c-0f4b-418d-8f05-a147000a9113\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450339 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/729e3c99-f28e-446a-af8d-d6f6225fadba-serving-cert\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsn8n\" (UniqueName: \"kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxd4\" (UniqueName: \"kubernetes.io/projected/189fd44f-cc62-480c-9c49-377810883c89-kube-api-access-rtxd4\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgmz\" (UniqueName: \"kubernetes.io/projected/b9612c6f-0d03-4f5c-99d9-1e6681cea174-kube-api-access-5vgmz\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxbc\" (UniqueName: \"kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-config\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450555 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nph\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450572 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-metrics-certs\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450619 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmf6\" (UniqueName: \"kubernetes.io/projected/d5068e6c-7f43-4692-b8df-968ca0907a62-kube-api-access-gwmf6\") pod \"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/189fd44f-cc62-480c-9c49-377810883c89-tmpfs\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450719 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthbs\" (UniqueName: \"kubernetes.io/projected/7c78fffb-6965-4ef7-b534-d713a7dd3318-kube-api-access-wthbs\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450812 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f32934d-e69b-4b98-b6db-447125e38ae0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.450828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-metrics-tls\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:24 crc kubenswrapper[4760]: E0123 18:03:24.458024 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:24.958006114 +0000 UTC m=+147.960464037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.501845 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2z6h"]
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-cabundle\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600567 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-key\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-images\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c78fffb-6965-4ef7-b534-d713a7dd3318-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjs4r\" (UniqueName: \"kubernetes.io/projected/729e3c99-f28e-446a-af8d-d6f6225fadba-kube-api-access-tjs4r\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600683 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600749 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-stats-auth\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjc7\" (UniqueName: \"kubernetes.io/projected/82314a42-a08b-4561-b24a-71e715d5d37f-kube-api-access-4kjc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600838 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-plugins-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jmq\" (UniqueName: \"kubernetes.io/projected/7f3d065c-0f4b-418d-8f05-a147000a9113-kube-api-access-45jmq\") pod \"migrator-59844c95c7-zbz2j\" (UID: \"7f3d065c-0f4b-418d-8f05-a147000a9113\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600884 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729e3c99-f28e-446a-af8d-d6f6225fadba-serving-cert\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsn8n\" (UniqueName: \"kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.600993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtxd4\" (UniqueName: \"kubernetes.io/projected/189fd44f-cc62-480c-9c49-377810883c89-kube-api-access-rtxd4\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgmz\" (UniqueName: \"kubernetes.io/projected/b9612c6f-0d03-4f5c-99d9-1e6681cea174-kube-api-access-5vgmz\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxbc\" (UniqueName: \"kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-config\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nph\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601126 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8fb\" (UniqueName: \"kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-metrics-certs\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x4lj\" (UniqueName: \"kubernetes.io/projected/096ce378-9dd6-4c27-a30b-1c619d486ccb-kube-api-access-5x4lj\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-csi-data-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601221 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37b6e215-7d18-4578-8e22-90068a8dabf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmf6\" (UniqueName: \"kubernetes.io/projected/d5068e6c-7f43-4692-b8df-968ca0907a62-kube-api-access-gwmf6\") pod \"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601265 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/189fd44f-cc62-480c-9c49-377810883c89-tmpfs\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601304 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-mountpoint-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rx8\" (UniqueName: \"kubernetes.io/projected/b570e412-35e6-4feb-a5b4-1a198b486a39-kube-api-access-w9rx8\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411040ce-40d3-4347-89d0-84f1a51bc0e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthbs\" (UniqueName: \"kubernetes.io/projected/7c78fffb-6965-4ef7-b534-d713a7dd3318-kube-api-access-wthbs\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f32934d-e69b-4b98-b6db-447125e38ae0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-metrics-tls\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b6e215-7d18-4578-8e22-90068a8dabf6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-socket-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08dd246-4139-4911-a4f8-4a5e693f12de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-etcd-client\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b570e412-35e6-4feb-a5b4-1a198b486a39-cert\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08dd246-4139-4911-a4f8-4a5e693f12de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-trusted-ca\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-apiservice-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8vn\" (UniqueName: \"kubernetes.io/projected/987bbc21-84aa-4e45-bb94-b0639da3c5c8-kube-api-access-gl8vn\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9612c6f-0d03-4f5c-99d9-1e6681cea174-proxy-tls\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e3c99-f28e-446a-af8d-d6f6225fadba-config\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-certs\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt22\" (UniqueName: \"kubernetes.io/projected/411040ce-40d3-4347-89d0-84f1a51bc0e7-kube-api-access-tqt22\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f32934d-e69b-4b98-b6db-447125e38ae0-config\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.601983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b6e215-7d18-4578-8e22-90068a8dabf6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6df9414-4e8b-4593-8401-1ba799201143-config-volume\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602063 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411040ce-40d3-4347-89d0-84f1a51bc0e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/987bbc21-84aa-4e45-bb94-b0639da3c5c8-service-ca-bundle\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602104 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqw2\" (UniqueName: \"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-kube-api-access-thqw2\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602123 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fm8\" (UniqueName: \"kubernetes.io/projected/2955aa37-6387-4353-ba15-ee92c902d318-kube-api-access-k6fm8\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602163 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5068e6c-7f43-4692-b8df-968ca0907a62-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jlm\" (UniqueName: \"kubernetes.io/projected/f6df9414-4e8b-4593-8401-1ba799201143-kube-api-access-58jlm\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f32934d-e69b-4b98-b6db-447125e38ae0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-node-bootstrap-token\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-srv-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602363 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602504 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-serving-cert\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602527 4760 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6df9414-4e8b-4593-8401-1ba799201143-metrics-tls\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602545 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-registration-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c78fffb-6965-4ef7-b534-d713a7dd3318-proxy-tls\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-default-certificate\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 
18:03:24.602622 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnkf\" (UniqueName: \"kubernetes.io/projected/4fc7dcb9-4d58-4912-b845-82e987928932-kube-api-access-fbnkf\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-webhook-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qfs\" (UniqueName: \"kubernetes.io/projected/f86c7176-88c1-4cd6-ab81-af7df8e9923f-kube-api-access-m7qfs\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602686 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: 
\"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwqcf\" (UniqueName: \"kubernetes.io/projected/33251e93-ea7b-4889-9822-f149f0331138-kube-api-access-kwqcf\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/82314a42-a08b-4561-b24a-71e715d5d37f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602821 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vmclt\" (UniqueName: \"kubernetes.io/projected/f08dd246-4139-4911-a4f8-4a5e693f12de-kube-api-access-vmclt\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.602852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.604240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-stats-auth\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: E0123 18:03:24.604463 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.1044476 +0000 UTC m=+148.106905533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.605267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-cabundle\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.608782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-images\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.609681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f86c7176-88c1-4cd6-ab81-af7df8e9923f-signing-key\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.610795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9612c6f-0d03-4f5c-99d9-1e6681cea174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.611121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729e3c99-f28e-446a-af8d-d6f6225fadba-serving-cert\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.611752 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e3c99-f28e-446a-af8d-d6f6225fadba-config\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.612286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f32934d-e69b-4b98-b6db-447125e38ae0-config\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.612417 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c78fffb-6965-4ef7-b534-d713a7dd3318-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.612980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.613049 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.614013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/987bbc21-84aa-4e45-bb94-b0639da3c5c8-service-ca-bundle\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.617018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-trusted-ca\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.617745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.618119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-config\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 
18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.619521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.619862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.619956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08dd246-4139-4911-a4f8-4a5e693f12de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.620352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.620421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9612c6f-0d03-4f5c-99d9-1e6681cea174-proxy-tls\") pod \"machine-config-operator-74547568cd-krv88\" (UID: 
\"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.622196 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.622246 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.623148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/189fd44f-cc62-480c-9c49-377810883c89-tmpfs\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.625647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.625770 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.626031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.626198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.626612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.626948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.627789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.627872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.627880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-metrics-tls\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.628259 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.628339 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/33251e93-ea7b-4889-9822-f149f0331138-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.628424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-default-certificate\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.628457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.628520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.634788 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.634894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc 
kubenswrapper[4760]: I0123 18:03:24.635258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-apiservice-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.635338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987bbc21-84aa-4e45-bb94-b0639da3c5c8-metrics-certs\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.635722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.635733 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f32934d-e69b-4b98-b6db-447125e38ae0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.637068 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/189fd44f-cc62-480c-9c49-377810883c89-webhook-cert\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.637267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/82314a42-a08b-4561-b24a-71e715d5d37f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.639646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjc7\" (UniqueName: \"kubernetes.io/projected/82314a42-a08b-4561-b24a-71e715d5d37f-kube-api-access-4kjc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-ffvsn\" (UID: \"82314a42-a08b-4561-b24a-71e715d5d37f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.649500 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-etcd-client\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.650044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.650227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.651389 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.657694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.666089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c78fffb-6965-4ef7-b534-d713a7dd3318-proxy-tls\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.666448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2955aa37-6387-4353-ba15-ee92c902d318-srv-cert\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.666660 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5068e6c-7f43-4692-b8df-968ca0907a62-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.668674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33251e93-ea7b-4889-9822-f149f0331138-serving-cert\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.668862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08dd246-4139-4911-a4f8-4a5e693f12de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.670349 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.671107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: 
\"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.696306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jmq\" (UniqueName: \"kubernetes.io/projected/7f3d065c-0f4b-418d-8f05-a147000a9113-kube-api-access-45jmq\") pod \"migrator-59844c95c7-zbz2j\" (UID: \"7f3d065c-0f4b-418d-8f05-a147000a9113\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-plugins-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703667 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8fb\" (UniqueName: \"kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703736 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-csi-data-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703756 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x4lj\" (UniqueName: \"kubernetes.io/projected/096ce378-9dd6-4c27-a30b-1c619d486ccb-kube-api-access-5x4lj\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37b6e215-7d18-4578-8e22-90068a8dabf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-mountpoint-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rx8\" (UniqueName: \"kubernetes.io/projected/b570e412-35e6-4feb-a5b4-1a198b486a39-kube-api-access-w9rx8\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703841 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411040ce-40d3-4347-89d0-84f1a51bc0e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b6e215-7d18-4578-8e22-90068a8dabf6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-socket-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b570e412-35e6-4feb-a5b4-1a198b486a39-cert\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.703983 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-certs\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt22\" (UniqueName: \"kubernetes.io/projected/411040ce-40d3-4347-89d0-84f1a51bc0e7-kube-api-access-tqt22\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b6e215-7d18-4578-8e22-90068a8dabf6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6df9414-4e8b-4593-8401-1ba799201143-config-volume\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411040ce-40d3-4347-89d0-84f1a51bc0e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58jlm\" (UniqueName: \"kubernetes.io/projected/f6df9414-4e8b-4593-8401-1ba799201143-kube-api-access-58jlm\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-node-bootstrap-token\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6df9414-4e8b-4593-8401-1ba799201143-metrics-tls\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-registration-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: 
\"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.704202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbnkf\" (UniqueName: \"kubernetes.io/projected/4fc7dcb9-4d58-4912-b845-82e987928932-kube-api-access-fbnkf\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.705674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-plugins-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.706477 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.706616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-csi-data-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.706745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-mountpoint-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: 
\"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.707289 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411040ce-40d3-4347-89d0-84f1a51bc0e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.707879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6df9414-4e8b-4593-8401-1ba799201143-config-volume\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.711612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-registration-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: E0123 18:03:24.711949 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.211934386 +0000 UTC m=+148.214392319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.712839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b6e215-7d18-4578-8e22-90068a8dabf6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.712931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/096ce378-9dd6-4c27-a30b-1c619d486ccb-socket-dir\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.716028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6df9414-4e8b-4593-8401-1ba799201143-metrics-tls\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.717005 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411040ce-40d3-4347-89d0-84f1a51bc0e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.717030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjs4r\" (UniqueName: \"kubernetes.io/projected/729e3c99-f28e-446a-af8d-d6f6225fadba-kube-api-access-tjs4r\") pod \"service-ca-operator-777779d784-5v4zz\" (UID: \"729e3c99-f28e-446a-af8d-d6f6225fadba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.717097 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.717244 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-node-bootstrap-token\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.719814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b570e412-35e6-4feb-a5b4-1a198b486a39-cert\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.720417 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.720858 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b6e215-7d18-4578-8e22-90068a8dabf6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.765931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4fc7dcb9-4d58-4912-b845-82e987928932-certs\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.770807 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.774700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qkh24"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.777801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h45nc"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.789641 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.791372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8vn\" (UniqueName: \"kubernetes.io/projected/987bbc21-84aa-4e45-bb94-b0639da3c5c8-kube-api-access-gl8vn\") pod \"router-default-5444994796-qd549\" (UID: \"987bbc21-84aa-4e45-bb94-b0639da3c5c8\") " pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.795069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.795472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qfs\" (UniqueName: \"kubernetes.io/projected/f86c7176-88c1-4cd6-ab81-af7df8e9923f-kube-api-access-m7qfs\") pod \"service-ca-9c57cc56f-sd99g\" (UID: \"f86c7176-88c1-4cd6-ab81-af7df8e9923f\") " pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.799473 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.801205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsn8n\" (UniqueName: \"kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n\") pod \"console-f9d7485db-h2x2h\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.801464 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.805829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:24 crc kubenswrapper[4760]: E0123 18:03:24.806109 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.306085064 +0000 UTC m=+148.308542997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.806361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: E0123 18:03:24.806919 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.306903646 +0000 UTC m=+148.309361579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.819373 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fm8\" (UniqueName: \"kubernetes.io/projected/2955aa37-6387-4353-ba15-ee92c902d318-kube-api-access-k6fm8\") pod \"olm-operator-6b444d44fb-5b48b\" (UID: \"2955aa37-6387-4353-ba15-ee92c902d318\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.837581 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.852937 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgmz\" (UniqueName: \"kubernetes.io/projected/b9612c6f-0d03-4f5c-99d9-1e6681cea174-kube-api-access-5vgmz\") pod \"machine-config-operator-74547568cd-krv88\" (UID: \"b9612c6f-0d03-4f5c-99d9-1e6681cea174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.853505 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqw2\" (UniqueName: \"kubernetes.io/projected/8f888aa2-f299-458e-bf2a-6f8aaadc6ebd-kube-api-access-thqw2\") pod \"ingress-operator-5b745b69d9-sqwjm\" (UID: \"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" Jan 23 18:03:24 crc kubenswrapper[4760]: 
I0123 18:03:24.854984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.867116 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.875149 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.875294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxbc\" (UniqueName: \"kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc\") pod \"oauth-openshift-558db77b4-s4mrj\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.880949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-l7lg7"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.895589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98nph\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.908251 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:24 crc 
kubenswrapper[4760]: E0123 18:03:24.908753 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.408738256 +0000 UTC m=+148.411196189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.911087 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.924905 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf"] Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.926250 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthbs\" (UniqueName: \"kubernetes.io/projected/7c78fffb-6965-4ef7-b534-d713a7dd3318-kube-api-access-wthbs\") pod \"machine-config-controller-84d6567774-879gc\" (UID: \"7c78fffb-6965-4ef7-b534-d713a7dd3318\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.944481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmf6\" (UniqueName: \"kubernetes.io/projected/d5068e6c-7f43-4692-b8df-968ca0907a62-kube-api-access-gwmf6\") pod 
\"multus-admission-controller-857f4d67dd-wl57f\" (UID: \"d5068e6c-7f43-4692-b8df-968ca0907a62\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.956603 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"]
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.957621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f32934d-e69b-4b98-b6db-447125e38ae0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qj7w6\" (UID: \"6f32934d-e69b-4b98-b6db-447125e38ae0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.972267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtxd4\" (UniqueName: \"kubernetes.io/projected/189fd44f-cc62-480c-9c49-377810883c89-kube-api-access-rtxd4\") pod \"packageserver-d55dfcdfc-qq44z\" (UID: \"189fd44f-cc62-480c-9c49-377810883c89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.993120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwqcf\" (UniqueName: \"kubernetes.io/projected/33251e93-ea7b-4889-9822-f149f0331138-kube-api-access-kwqcf\") pod \"etcd-operator-b45778765-pmhlc\" (UID: \"33251e93-ea7b-4889-9822-f149f0331138\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.996012 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lqr88"]
Jan 23 18:03:24 crc kubenswrapper[4760]: I0123 18:03:24.996080 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw"]
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.000669 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sd99g"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.008508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.009843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.010239 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.510218615 +0000 UTC m=+148.512676548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.016496 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm"]
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.019873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmclt\" (UniqueName: \"kubernetes.io/projected/f08dd246-4139-4911-a4f8-4a5e693f12de-kube-api-access-vmclt\") pod \"kube-storage-version-migrator-operator-b67b599dd-ffzzt\" (UID: \"f08dd246-4139-4911-a4f8-4a5e693f12de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"
Jan 23 18:03:25 crc kubenswrapper[4760]: W0123 18:03:25.025602 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ec118f_1c98_4ea4_870a_7ced4b2303e5.slice/crio-451a94b21ab9c0d435b9ac441d486ff975346b0f6a10725ced94a6772f2d1cbc WatchSource:0}: Error finding container 451a94b21ab9c0d435b9ac441d486ff975346b0f6a10725ced94a6772f2d1cbc: Status 404 returned error can't find the container with id 451a94b21ab9c0d435b9ac441d486ff975346b0f6a10725ced94a6772f2d1cbc
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.034843 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.044505 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.053214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.056606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnkf\" (UniqueName: \"kubernetes.io/projected/4fc7dcb9-4d58-4912-b845-82e987928932-kube-api-access-fbnkf\") pod \"machine-config-server-kt5pm\" (UID: \"4fc7dcb9-4d58-4912-b845-82e987928932\") " pod="openshift-machine-config-operator/machine-config-server-kt5pm"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.064589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.079692 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:25 crc kubenswrapper[4760]: W0123 18:03:25.083535 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff160511_4992_4cb9_b103_645a1dd82f55.slice/crio-4f6d03db9b80ad0b397d81b7e185a0353cf9e6e9958d7398ea3cc13938fd2985 WatchSource:0}: Error finding container 4f6d03db9b80ad0b397d81b7e185a0353cf9e6e9958d7398ea3cc13938fd2985: Status 404 returned error can't find the container with id 4f6d03db9b80ad0b397d81b7e185a0353cf9e6e9958d7398ea3cc13938fd2985
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.124620 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.125304 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.625288022 +0000 UTC m=+148.627745955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.128922 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.133711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf8fb\" (UniqueName: \"kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb\") pod \"collect-profiles-29486520-pcprg\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.145023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.145566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x4lj\" (UniqueName: \"kubernetes.io/projected/096ce378-9dd6-4c27-a30b-1c619d486ccb-kube-api-access-5x4lj\") pod \"csi-hostpathplugin-b9mm2\" (UID: \"096ce378-9dd6-4c27-a30b-1c619d486ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.152140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37b6e215-7d18-4578-8e22-90068a8dabf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdqvd\" (UID: \"37b6e215-7d18-4578-8e22-90068a8dabf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.173700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rx8\" (UniqueName: \"kubernetes.io/projected/b570e412-35e6-4feb-a5b4-1a198b486a39-kube-api-access-w9rx8\") pod \"ingress-canary-526r6\" (UID: \"b570e412-35e6-4feb-a5b4-1a198b486a39\") " pod="openshift-ingress-canary/ingress-canary-526r6"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.187322 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58jlm\" (UniqueName: \"kubernetes.io/projected/f6df9414-4e8b-4593-8401-1ba799201143-kube-api-access-58jlm\") pod \"dns-default-6srzg\" (UID: \"f6df9414-4e8b-4593-8401-1ba799201143\") " pod="openshift-dns/dns-default-6srzg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.187544 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.197294 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.207305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kt5pm"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.215553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt22\" (UniqueName: \"kubernetes.io/projected/411040ce-40d3-4347-89d0-84f1a51bc0e7-kube-api-access-tqt22\") pod \"openshift-apiserver-operator-796bbdcf4f-jq7dg\" (UID: \"411040ce-40d3-4347-89d0-84f1a51bc0e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.215869 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-526r6"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.235073 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.235492 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.735477594 +0000 UTC m=+148.737935527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.239656 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.242020 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6srzg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.336022 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.336587 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.836568513 +0000 UTC m=+148.839026446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.387569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn"]
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.425168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qd549" event={"ID":"987bbc21-84aa-4e45-bb94-b0639da3c5c8","Type":"ContainerStarted","Data":"0142c93c7f3cf461d0916645e092fcbb31207bee294f077b68d6cd2fa4b72f0b"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.434501 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" event={"ID":"c5ec118f-1c98-4ea4-870a-7ced4b2303e5","Type":"ContainerStarted","Data":"451a94b21ab9c0d435b9ac441d486ff975346b0f6a10725ced94a6772f2d1cbc"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.435936 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" event={"ID":"c27b9e2a-0800-4159-9c97-7e46c2e546c1","Type":"ContainerStarted","Data":"860f88cc648c7ba44fc2dbd27eb76d810853090dd2df86442e5d5455a29dfab6"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.435959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" event={"ID":"c27b9e2a-0800-4159-9c97-7e46c2e546c1","Type":"ContainerStarted","Data":"ad35214b65eccf9d11d7f28df9f475e9a750792590544a2234508be3c4e19dd6"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.436530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" event={"ID":"c08fd2c2-1700-4296-a369-62c3c9928a63","Type":"ContainerStarted","Data":"5d521be64f771b9be91e9c0391f07b67c5487d451f03198674bcb792751eec74"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.437357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" event={"ID":"ff160511-4992-4cb9-b103-645a1dd82f55","Type":"ContainerStarted","Data":"4f6d03db9b80ad0b397d81b7e185a0353cf9e6e9958d7398ea3cc13938fd2985"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.438715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.439032 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:25.939018629 +0000 UTC m=+148.941476562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.449528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" event={"ID":"4648a70d-5af3-459c-815f-d12089d27b88","Type":"ContainerStarted","Data":"26e1d57aa642c176b8fcac7f89f28f15724e9a42d6b8eb4df0d4b6cdf90d05c7"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.454441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wv6zt" event={"ID":"c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916","Type":"ContainerStarted","Data":"23329981840b72df4ed968cd5f7e6d0fd38935dd58766619e58110ae21a910bd"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.455309 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wv6zt"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.458285 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" event={"ID":"f0dadab3-41d3-4f5e-bd3f-7860880717b1","Type":"ContainerStarted","Data":"8b42f1d70cf4fdac51e91e5dba61cf84b6bcd229016383e00c5b103d273b7bb2"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.458324 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" event={"ID":"f0dadab3-41d3-4f5e-bd3f-7860880717b1","Type":"ContainerStarted","Data":"970b546db1b3341f1eeaab5d3dd498af1e56fea8d1abbc4620ea7516d79af10d"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.459801 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" event={"ID":"04f83f2f-9119-42d7-b712-06dc0ef0adfd","Type":"ContainerStarted","Data":"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.460303 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.491228 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.524460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" event={"ID":"8a30bb8e-9df6-4c48-8532-ad9280521fb3","Type":"ContainerStarted","Data":"307efdc0fed7c343af6f0907f77f058c0bd6564d4548fd6718c0e6efbf583488"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.525819 4760 generic.go:334] "Generic (PLEG): container finished" podID="b10c111b-45c0-4b24-9644-39c0ce99342d" containerID="0fcc0731994185e23934edccce406f3dff9363c037140fe3b28cad704f4e1b35" exitCode=0
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.526425 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" event={"ID":"b10c111b-45c0-4b24-9644-39c0ce99342d","Type":"ContainerDied","Data":"0fcc0731994185e23934edccce406f3dff9363c037140fe3b28cad704f4e1b35"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.526448 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" event={"ID":"b10c111b-45c0-4b24-9644-39c0ce99342d","Type":"ContainerStarted","Data":"efb46f798657bbd2506e615a92675b6500c2067ef14fcefa01aae0f77020f53c"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.527685 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" event={"ID":"faa0722c-1acf-445e-8785-a8030be562b6","Type":"ContainerStarted","Data":"eaf1d421a1b871dad5943a42e4e901da92e98415ad28de21b663c9ce23e08247"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.528691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" event={"ID":"1dba0724-514b-42cb-a0e5-9fc8caa3844f","Type":"ContainerStarted","Data":"e482d956d1d2b62cfc05f4fbaa274a9b72e0ceca56889d76e5cda6483a5380dc"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.528709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" event={"ID":"1dba0724-514b-42cb-a0e5-9fc8caa3844f","Type":"ContainerStarted","Data":"1ff7f756037ccd1ff41670832f0bde5f0b10ccb0e327d965cc260b223a1b7f4f"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.530252 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h45nc" event={"ID":"61a396e8-372f-4982-8994-d60baa42da95","Type":"ContainerStarted","Data":"3df2efc10fb0d60abc8100ec1cc811579828014e66c3e8f73360407c3d0d183e"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.531364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerStarted","Data":"404faa543b724e4f656cb3d826fc46dedf85b1aafed3bdb256059a9b36e73630"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.532162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" event={"ID":"678b518e-f9ae-4fd0-bc08-b0489bf0aa07","Type":"ContainerStarted","Data":"db305e8d54634e6c4b91bf50b20b0c6dc769416598e764ea9d6200120cbb07eb"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.533027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" event={"ID":"e359c6a3-6164-43d1-823e-829223bb3605","Type":"ContainerStarted","Data":"33d148335cd59ba123144c532df6a05a86dead7c2186a4a0067f92ac5c6ff261"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.540068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.540909 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.040884901 +0000 UTC m=+149.043342834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.651699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.652118 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.152103081 +0000 UTC m=+149.154561014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.740625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" event={"ID":"a9e97eeb-2c62-421e-b81a-875190083260","Type":"ContainerStarted","Data":"fe7d07cecac5933fcb7c7b72ccc45ebe3f7aa73fb2b3e0e0463f46d8a817be81"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.740687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" event={"ID":"a9e97eeb-2c62-421e-b81a-875190083260","Type":"ContainerStarted","Data":"859ab5eff3e52d00cc2512a378423e5d690b9b5ba45c3ea6b1e0e238b27be67d"}
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.750643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.753568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.761724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.772031 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.27197918 +0000 UTC m=+149.274437123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.772382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.773983 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.273971945 +0000 UTC m=+149.276429888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:25 crc kubenswrapper[4760]: I0123 18:03:25.899251 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:25 crc kubenswrapper[4760]: E0123 18:03:25.899612 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.399598114 +0000 UTC m=+149.402056047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.009843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.010220 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.510206806 +0000 UTC m=+149.512664739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.062458 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz"]
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.083094 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"]
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.092023 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h2x2h"]
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.093862 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j"]
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.113199 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.113316 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.613289051 +0000 UTC m=+149.615746984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.113587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.113942 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.613932468 +0000 UTC m=+149.616390391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.215166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.215351 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.715322797 +0000 UTC m=+149.717780740 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.215870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.216345 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.716330704 +0000 UTC m=+149.718788637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.235172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.248298 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" podStartSLOduration=130.248276869 podStartE2EDuration="2m10.248276869s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:26.245722938 +0000 UTC m=+149.248180881" watchObservedRunningTime="2026-01-23 18:03:26.248276869 +0000 UTC m=+149.250734802" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.287502 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2z6h" podStartSLOduration=130.287466754 podStartE2EDuration="2m10.287466754s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:26.28480227 +0000 UTC m=+149.287260203" watchObservedRunningTime="2026-01-23 18:03:26.287466754 +0000 UTC m=+149.289924687" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.316328 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.316690 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.816672373 +0000 UTC m=+149.819130306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.366488 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wv6zt" podStartSLOduration=130.366469042 podStartE2EDuration="2m10.366469042s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:26.365008061 +0000 UTC m=+149.367466004" watchObservedRunningTime="2026-01-23 18:03:26.366469042 +0000 UTC m=+149.368926995" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.404852 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zjqml" podStartSLOduration=130.404814783 
podStartE2EDuration="2m10.404814783s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:26.403774565 +0000 UTC m=+149.406232508" watchObservedRunningTime="2026-01-23 18:03:26.404814783 +0000 UTC m=+149.407272716" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.418108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.418597 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:26.918578105 +0000 UTC m=+149.921036038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.443388 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sd99g"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.445780 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-krv88"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.455198 4760 patch_prober.go:28] interesting pod/console-operator-58897d9998-wv6zt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.455267 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wv6zt" podUID="c7e7e23f-1e60-4d49-aaf8-d8e39f5f8916" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.498688 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" podStartSLOduration=129.498669053 podStartE2EDuration="2m9.498669053s" podCreationTimestamp="2026-01-23 
18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:26.495141594 +0000 UTC m=+149.497599527" watchObservedRunningTime="2026-01-23 18:03:26.498669053 +0000 UTC m=+149.501126986" Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.505320 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhlc"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.505492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wl57f"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.507237 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.517144 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm"] Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.520047 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.520338 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.020323802 +0000 UTC m=+150.022781735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.569360 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt"] Jan 23 18:03:26 crc kubenswrapper[4760]: W0123 18:03:26.571983 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f32934d_e69b_4b98_b6db_447125e38ae0.slice/crio-bb26bccfdee47c51d3be935b8a43633ca9fdfcc6f11993da77f53f898b042d31 WatchSource:0}: Error finding container bb26bccfdee47c51d3be935b8a43633ca9fdfcc6f11993da77f53f898b042d31: Status 404 returned error can't find the container with id bb26bccfdee47c51d3be935b8a43633ca9fdfcc6f11993da77f53f898b042d31 Jan 23 18:03:26 crc kubenswrapper[4760]: W0123 18:03:26.584219 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf86c7176_88c1_4cd6_ab81_af7df8e9923f.slice/crio-2f1be942ca81e97cbd8d17f51bdaf4196263b6eefa42f90e49cffb0448b9d2d6 WatchSource:0}: Error finding container 2f1be942ca81e97cbd8d17f51bdaf4196263b6eefa42f90e49cffb0448b9d2d6: Status 404 returned error can't find the container with id 2f1be942ca81e97cbd8d17f51bdaf4196263b6eefa42f90e49cffb0448b9d2d6 Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.621256 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.621857 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.121845184 +0000 UTC m=+150.124303117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.722985 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.723543 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.223523039 +0000 UTC m=+150.225980972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.786938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" event={"ID":"2955aa37-6387-4353-ba15-ee92c902d318","Type":"ContainerStarted","Data":"c6aa67549d7ff03633cd0b44a729649ddf8f870abded6b98d680cb9a9a545005"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.791633 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" event={"ID":"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd","Type":"ContainerStarted","Data":"addae33d86bd11a987e66829efb7a4ad1142005f006fc5d2dafd7618ca63256b"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.825457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.825838 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.325824301 +0000 UTC m=+150.328282234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.841632 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" event={"ID":"729e3c99-f28e-446a-af8d-d6f6225fadba","Type":"ContainerStarted","Data":"60193da4787d9da43bef8b982f0e5e12d95fd2416259ac93bc39e42dbb511820"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.889025 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kt5pm" event={"ID":"4fc7dcb9-4d58-4912-b845-82e987928932","Type":"ContainerStarted","Data":"7a8085399850557ffdfb58bc57b548d9e57dabe014b9f13e0eb6e03c9009ed0e"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.893580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" event={"ID":"7f3d065c-0f4b-418d-8f05-a147000a9113","Type":"ContainerStarted","Data":"80462824d40107f5c170b44fbddc0442666abd561788fc26c9910dfd8f6df75c"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.895840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" event={"ID":"82314a42-a08b-4561-b24a-71e715d5d37f","Type":"ContainerStarted","Data":"5ad55682685970103dfd2628ea9b90910b05fd777094cf66faa6791749a1e53b"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.901488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" event={"ID":"6f32934d-e69b-4b98-b6db-447125e38ae0","Type":"ContainerStarted","Data":"bb26bccfdee47c51d3be935b8a43633ca9fdfcc6f11993da77f53f898b042d31"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.903046 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" event={"ID":"f08dd246-4139-4911-a4f8-4a5e693f12de","Type":"ContainerStarted","Data":"d99554437ba880673e090f3b6525e4fddaf13e24262d95d28119f0b831ca82e8"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.904428 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" event={"ID":"f86c7176-88c1-4cd6-ab81-af7df8e9923f","Type":"ContainerStarted","Data":"2f1be942ca81e97cbd8d17f51bdaf4196263b6eefa42f90e49cffb0448b9d2d6"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.915470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" event={"ID":"d5068e6c-7f43-4692-b8df-968ca0907a62","Type":"ContainerStarted","Data":"6de6f6d49dbc606933d227de3635eca4e4244bb29af169b42f66e3d58c8ffdec"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.926881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:26 crc kubenswrapper[4760]: E0123 18:03:26.927154 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 18:03:27.427140078 +0000 UTC m=+150.429598011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.945605 4760 generic.go:334] "Generic (PLEG): container finished" podID="a9e97eeb-2c62-421e-b81a-875190083260" containerID="fe7d07cecac5933fcb7c7b72ccc45ebe3f7aa73fb2b3e0e0463f46d8a817be81" exitCode=0 Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.945675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" event={"ID":"a9e97eeb-2c62-421e-b81a-875190083260","Type":"ContainerDied","Data":"fe7d07cecac5933fcb7c7b72ccc45ebe3f7aa73fb2b3e0e0463f46d8a817be81"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.949959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" event={"ID":"b9612c6f-0d03-4f5c-99d9-1e6681cea174","Type":"ContainerStarted","Data":"2f674b4df1f380681fb2b4bf2bbd29c7eb50f1af6e3faee97b0d7eec9f98088c"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.954745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" event={"ID":"33251e93-ea7b-4889-9822-f149f0331138","Type":"ContainerStarted","Data":"be44891dccd07fb14895a0b13fcaef3343b25991367ca8c8d895a36c01038abe"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.960573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" event={"ID":"189fd44f-cc62-480c-9c49-377810883c89","Type":"ContainerStarted","Data":"835c7ba6a9dd7e1b65e02daaee044307e347fb686632bf10f9f7d5d7149024a0"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.969369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2x2h" event={"ID":"d3f94f74-4a2c-419a-b73f-c654dbf783b5","Type":"ContainerStarted","Data":"a83acfc56c52605edad348df70d880e935604a4368c5a9993662eb5a036fa922"} Jan 23 18:03:26 crc kubenswrapper[4760]: I0123 18:03:26.990083 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wv6zt" Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.028684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.028979 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.528967887 +0000 UTC m=+150.531425820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.129175 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.130607 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.630585771 +0000 UTC m=+150.633043704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.231150 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.231424 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.731400042 +0000 UTC m=+150.733857975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.336627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.337339 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.837322926 +0000 UTC m=+150.839780859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.440268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.440556 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:27.940543924 +0000 UTC m=+150.943001857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.489903 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.535199 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.542107 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.542440 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.042425515 +0000 UTC m=+151.044883448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.546813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6srzg"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.644156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.644785 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.144772939 +0000 UTC m=+151.147230872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.727736 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s4mrj"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.744603 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-526r6"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.745065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.745612 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.245593991 +0000 UTC m=+151.248051924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.755234 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-879gc"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.758094 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.844891 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b9mm2"] Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.850348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.850750 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.350737652 +0000 UTC m=+151.353195585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: W0123 18:03:27.891299 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff72bd3_e963_4c1b_8187_dbff951b1f8d.slice/crio-5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6 WatchSource:0}: Error finding container 5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6: Status 404 returned error can't find the container with id 5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6 Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.961490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.961901 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.46187927 +0000 UTC m=+151.464337203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: E0123 18:03:27.963195 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.463181476 +0000 UTC m=+151.465639409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:27 crc kubenswrapper[4760]: I0123 18:03:27.961995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.034327 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" 
event={"ID":"37b6e215-7d18-4578-8e22-90068a8dabf6","Type":"ContainerStarted","Data":"4215d8ba8ba1d0510c391659bd37066dad7e187a2327cd54f4abbab572084a34"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.070146 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerStarted","Data":"62984e774ea701ec3386ea7348b41ad61c2075c6f755e60ac7e1f5868efb4523"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.072251 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.077625 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.079733 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.579699443 +0000 UTC m=+151.582157376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.081952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.082216 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.582206482 +0000 UTC m=+151.584664415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.108719 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-49s9b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.108798 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.111451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" event={"ID":"33251e93-ea7b-4889-9822-f149f0331138","Type":"ContainerStarted","Data":"1f63c1b14e56f7b21324240ee4cb30d4a3b63e672d6a474d392b38f5128b1f8b"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.145741 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2x2h" event={"ID":"d3f94f74-4a2c-419a-b73f-c654dbf783b5","Type":"ContainerStarted","Data":"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.181849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" event={"ID":"c08fd2c2-1700-4296-a369-62c3c9928a63","Type":"ContainerStarted","Data":"3ea72b5c1f97bd31185dc682d4de59ba96119130ce970e6066764865e2c9d250"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.189349 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.191574 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.69155043 +0000 UTC m=+151.694008413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.251919 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h45nc" event={"ID":"61a396e8-372f-4982-8994-d60baa42da95","Type":"ContainerStarted","Data":"822d15aabe32f223ef335821c675c3f70b10c5e5ebf6ef8814001117c6dc9d10"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.253102 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.279271 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-h45nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.279332 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h45nc" podUID="61a396e8-372f-4982-8994-d60baa42da95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.292861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: 
\"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.293367 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.793347348 +0000 UTC m=+151.795805331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.337129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" event={"ID":"b9612c6f-0d03-4f5c-99d9-1e6681cea174","Type":"ContainerStarted","Data":"0a97e7e675589b6a1b4ebc9b2e1eb07f1e60ba7add6dbcc7e66382b85b2967be"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.391315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" event={"ID":"189fd44f-cc62-480c-9c49-377810883c89","Type":"ContainerStarted","Data":"fe81c0a3e9be603f33d45c7137d6353c8d2bf3bdb4f7ebcc7b784838755a6872"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.392127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.394212 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.394534 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:28.89451578 +0000 UTC m=+151.896973713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.403119 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" event={"ID":"7f3d065c-0f4b-418d-8f05-a147000a9113","Type":"ContainerStarted","Data":"4a0e45acf8d523074b495204ac5dbbc77f8f66dd33b5cacf297f5974d6a0c119"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.420741 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" event={"ID":"411040ce-40d3-4347-89d0-84f1a51bc0e7","Type":"ContainerStarted","Data":"6679329d63201dde105b2b694f7ad4717ec1376f31e2736912804ceafa951864"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.423812 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" 
event={"ID":"aff72bd3-e963-4c1b-8187-dbff951b1f8d","Type":"ContainerStarted","Data":"5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.427232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" event={"ID":"b10c111b-45c0-4b24-9644-39c0ce99342d","Type":"ContainerStarted","Data":"62e04357d0a8f4ab0a997491caee5b4c600c12dd52a94436ade0acf86f02db9d"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.443621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" event={"ID":"678b518e-f9ae-4fd0-bc08-b0489bf0aa07","Type":"ContainerStarted","Data":"42249643b91b81800b9daed2e53dcaa9b0c53441081a4684766c81d1454c04ec"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.445986 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6srzg" event={"ID":"f6df9414-4e8b-4593-8401-1ba799201143","Type":"ContainerStarted","Data":"34f7eafafbc7da06ba0cd6b2b0ab0e16b90a6c402e3da301f5db48e78ef97b23"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.458661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" event={"ID":"82314a42-a08b-4561-b24a-71e715d5d37f","Type":"ContainerStarted","Data":"a7afbb4c1afbecd5054da0152db400948d58987ddcb983bf76291aea330fb43d"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.468671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" event={"ID":"ff160511-4992-4cb9-b103-645a1dd82f55","Type":"ContainerStarted","Data":"d53ee1c4009c06ddf3fcf2f0717d2d6965781f018c8a182bdc1e178e5456cffa"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.471157 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.483893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qd549" event={"ID":"987bbc21-84aa-4e45-bb94-b0639da3c5c8","Type":"ContainerStarted","Data":"475fba9af57ea4ac7e6efe8a2e89e656aae3a64ec35e8979be6fc8abc6a45539"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.497423 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.510037 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.010017489 +0000 UTC m=+152.012475432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.529794 4760 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mdqhm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.529856 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" podUID="ff160511-4992-4cb9-b103-645a1dd82f55" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.580484 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" event={"ID":"c27b9e2a-0800-4159-9c97-7e46c2e546c1","Type":"ContainerStarted","Data":"c5765ca5a34f73d9b201751019ab3d5da5cdbcc43bbe22e1d68cfe24efddcf9e"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.593795 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-h45nc" podStartSLOduration=132.593775198 podStartE2EDuration="2m12.593775198s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 18:03:28.495903578 +0000 UTC m=+151.498361511" watchObservedRunningTime="2026-01-23 18:03:28.593775198 +0000 UTC m=+151.596233131" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.594201 4760 generic.go:334] "Generic (PLEG): container finished" podID="faa0722c-1acf-445e-8785-a8030be562b6" containerID="7bc1916ba8445fbf907a7dea5a22d296102cd3eb102c61aa6eae4f3b1ec28d5b" exitCode=0 Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.594277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" event={"ID":"faa0722c-1acf-445e-8785-a8030be562b6","Type":"ContainerDied","Data":"7bc1916ba8445fbf907a7dea5a22d296102cd3eb102c61aa6eae4f3b1ec28d5b"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.597983 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" podStartSLOduration=131.597958614 podStartE2EDuration="2m11.597958614s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.590976821 +0000 UTC m=+151.593434754" watchObservedRunningTime="2026-01-23 18:03:28.597958614 +0000 UTC m=+151.600416567" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.599100 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.600272 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.100260087 +0000 UTC m=+152.102718020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.620114 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" event={"ID":"4648a70d-5af3-459c-815f-d12089d27b88","Type":"ContainerStarted","Data":"6b406f00f204468bcdfb4de81b917b4cfab57a1500530198e873a353987d9cd0"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.622979 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h2x2h" podStartSLOduration=132.622967706 podStartE2EDuration="2m12.622967706s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.621698712 +0000 UTC m=+151.624156665" watchObservedRunningTime="2026-01-23 18:03:28.622967706 +0000 UTC m=+151.625425639" Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.651657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" event={"ID":"8a30bb8e-9df6-4c48-8532-ad9280521fb3","Type":"ContainerStarted","Data":"c8bd834e0500f8ddd5b5e4040eab6f161dae2530e37884bff9e6a92fcc52f894"} Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.710723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.729926 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.229907507 +0000 UTC m=+152.232365450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.733634 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhlc" podStartSLOduration=131.733619661 podStartE2EDuration="2m11.733619661s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.659882148 +0000 UTC m=+151.662340081" watchObservedRunningTime="2026-01-23 18:03:28.733619661 +0000 UTC m=+151.736077584"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.818254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.818660 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.318646555 +0000 UTC m=+152.321104488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.818861 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" podStartSLOduration=131.81884837 podStartE2EDuration="2m11.81884837s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.81809828 +0000 UTC m=+151.820556213" watchObservedRunningTime="2026-01-23 18:03:28.81884837 +0000 UTC m=+151.821306303"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.819939 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" podStartSLOduration=131.81993245 podStartE2EDuration="2m11.81993245s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.732841769 +0000 UTC m=+151.735299702" watchObservedRunningTime="2026-01-23 18:03:28.81993245 +0000 UTC m=+151.822390383"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.833060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" event={"ID":"c5ec118f-1c98-4ea4-870a-7ced4b2303e5","Type":"ContainerStarted","Data":"beb3bc6160e6a91ac80612d9a44c5cc137dcae92f8f6a1b64905ddb13599b082"}
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.834702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" event={"ID":"7c78fffb-6965-4ef7-b534-d713a7dd3318","Type":"ContainerStarted","Data":"52d2f6e54a23e7edfc534399c529ba76355a68544addf566a61385e3570c72b9"}
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.855924 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l7btq" podStartSLOduration=132.855903666 podStartE2EDuration="2m12.855903666s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:28.845524278 +0000 UTC m=+151.847982221" watchObservedRunningTime="2026-01-23 18:03:28.855903666 +0000 UTC m=+151.858361599"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.899886 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qd549"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.943050 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.943101 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.944354 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:28 crc kubenswrapper[4760]: E0123 18:03:28.945630 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.445614241 +0000 UTC m=+152.448072174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:28 crc kubenswrapper[4760]: I0123 18:03:28.974034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" event={"ID":"729e3c99-f28e-446a-af8d-d6f6225fadba","Type":"ContainerStarted","Data":"0e880ae588f64c41e88b6958da540730493511d5a4388b1c63b595cf8435cad7"}
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.014662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" event={"ID":"2955aa37-6387-4353-ba15-ee92c902d318","Type":"ContainerStarted","Data":"a6740baacfc740c81ae17e88d0c11230384ed6709f81d95a9bf445b0e2861f7f"}
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.015767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.046438 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.047905 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.547883113 +0000 UTC m=+152.550341046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.054812 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.055107 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.084350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" event={"ID":"e359c6a3-6164-43d1-823e-829223bb3605","Type":"ContainerStarted","Data":"86e1ce3c68d5947d033e0030cb78fc7546fb1967163397799ad6b13120f8e7f8"}
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.146604 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5b48b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.146683 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" podUID="2955aa37-6387-4353-ba15-ee92c902d318" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.149072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.150443 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.650422422 +0000 UTC m=+152.652880355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.225143 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" podStartSLOduration=132.225125441 podStartE2EDuration="2m12.225125441s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:29.224432022 +0000 UTC m=+152.226889955" watchObservedRunningTime="2026-01-23 18:03:29.225125441 +0000 UTC m=+152.227583374"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.250493 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.256564 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.75653319 +0000 UTC m=+152.758991123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.257341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.257790 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.757775155 +0000 UTC m=+152.760233088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.358714 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.358968 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.858953667 +0000 UTC m=+152.861411600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.396577 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qq44z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.396621 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" podUID="189fd44f-cc62-480c-9c49-377810883c89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.471727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.472218 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:29.972204902 +0000 UTC m=+152.974662835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.528426 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" podStartSLOduration=132.528392748 podStartE2EDuration="2m12.528392748s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:29.527969216 +0000 UTC m=+152.530427149" watchObservedRunningTime="2026-01-23 18:03:29.528392748 +0000 UTC m=+152.530850681"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.530006 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ffvsn" podStartSLOduration=132.529999933 podStartE2EDuration="2m12.529999933s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:29.31324532 +0000 UTC m=+152.315703243" watchObservedRunningTime="2026-01-23 18:03:29.529999933 +0000 UTC m=+152.532457866"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.580243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.580857 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.08083657 +0000 UTC m=+153.083294513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.681826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.682192 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.182175086 +0000 UTC m=+153.184633019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.782664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.782811 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.282782592 +0000 UTC m=+153.285240535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.783283 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.783661 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.283645526 +0000 UTC m=+153.286103459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.784932 4760 csr.go:261] certificate signing request csr-mpzvd is approved, waiting to be issued
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.838906 4760 csr.go:257] certificate signing request csr-mpzvd is issued
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.841089 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qd549" podStartSLOduration=132.841071826 podStartE2EDuration="2m12.841071826s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:29.840202283 +0000 UTC m=+152.842660216" watchObservedRunningTime="2026-01-23 18:03:29.841071826 +0000 UTC m=+152.843529759"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.881246 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 18:03:29 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 23 18:03:29 crc kubenswrapper[4760]: [+]process-running ok
Jan 23 18:03:29 crc kubenswrapper[4760]: healthz check failed
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.881305 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.885280 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.885639 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.38560893 +0000 UTC m=+153.388066863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.978613 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hsfsw" podStartSLOduration=132.978594474 podStartE2EDuration="2m12.978594474s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:29.946952219 +0000 UTC m=+152.949410152" watchObservedRunningTime="2026-01-23 18:03:29.978594474 +0000 UTC m=+152.981052417"
Jan 23 18:03:29 crc kubenswrapper[4760]: I0123 18:03:29.986468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:29 crc kubenswrapper[4760]: E0123 18:03:29.986851 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.486837253 +0000 UTC m=+153.489295186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.029123 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsqmf" podStartSLOduration=133.029103863 podStartE2EDuration="2m13.029103863s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.028744573 +0000 UTC m=+153.031202506" watchObservedRunningTime="2026-01-23 18:03:30.029103863 +0000 UTC m=+153.031561806"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.086812 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.087020 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.587005337 +0000 UTC m=+153.589463260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.095903 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" podStartSLOduration=133.095889382 podStartE2EDuration="2m13.095889382s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.094889664 +0000 UTC m=+153.097347597" watchObservedRunningTime="2026-01-23 18:03:30.095889382 +0000 UTC m=+153.098347315"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.104508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" event={"ID":"7f3d065c-0f4b-418d-8f05-a147000a9113","Type":"ContainerStarted","Data":"b49fb3086b3914e3760f6f7ccc3afda3fddb8c9c2cc8e9e1da3d853f2b1443f5"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.106290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-526r6" event={"ID":"b570e412-35e6-4feb-a5b4-1a198b486a39","Type":"ContainerStarted","Data":"18926a120599b4d39ae1dc0e6f9907ffe8506e68a24276e22b8c4d77ab6051da"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.106312 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-526r6" event={"ID":"b570e412-35e6-4feb-a5b4-1a198b486a39","Type":"ContainerStarted","Data":"c222a2f341e94f489da98b1ba4754ea6e44616fb04669745ecb1483df16fa2c3"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.123864 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5v4zz" podStartSLOduration=133.123849957 podStartE2EDuration="2m13.123849957s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.122940491 +0000 UTC m=+153.125398424" watchObservedRunningTime="2026-01-23 18:03:30.123849957 +0000 UTC m=+153.126307890"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.142129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" event={"ID":"678b518e-f9ae-4fd0-bc08-b0489bf0aa07","Type":"ContainerStarted","Data":"6fcdc0604563825e92996e9654ae5c18aaef4145cb9368b4fc33a8c2f1be3ef6"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.159996 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kt5pm" event={"ID":"4fc7dcb9-4d58-4912-b845-82e987928932","Type":"ContainerStarted","Data":"9cf9ec86be330dbdefd2cc3de2aa5e5e44f1f4608c8f547f19d58b8274e80a58"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.195301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf"
Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.195596 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.695584552 +0000 UTC m=+153.698042485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.200090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" event={"ID":"7c78fffb-6965-4ef7-b534-d713a7dd3318","Type":"ContainerStarted","Data":"fd1ef03d1862267e58439620937a5d87917cd55e776010ea0bbaa325a863b1a8"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.203148 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-526r6" podStartSLOduration=9.203129322 podStartE2EDuration="9.203129322s" podCreationTimestamp="2026-01-23 18:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.200719535 +0000 UTC m=+153.203177468" watchObservedRunningTime="2026-01-23 18:03:30.203129322 +0000 UTC m=+153.205587255"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.211746 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" event={"ID":"37543438-dc29-44d6-a46e-8864aa3fcad4","Type":"ContainerStarted","Data":"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.211788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" event={"ID":"37543438-dc29-44d6-a46e-8864aa3fcad4","Type":"ContainerStarted","Data":"a2075edd79e9404dae38dd8ce8720afcd639b1591b56d9ebbe94bea8b21100c6"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.212582 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.253809 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s4mrj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body=
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.253866 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.290969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" event={"ID":"8a30bb8e-9df6-4c48-8532-ad9280521fb3","Type":"ContainerStarted","Data":"9419b58442f849a010588618f83039d2952fde020ef2715679f033aebe1535a2"}
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.291231 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp"
Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.297649 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.298501 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:30.798482112 +0000 UTC m=+153.800940055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.325884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" event={"ID":"f86c7176-88c1-4cd6-ab81-af7df8e9923f","Type":"ContainerStarted","Data":"ff23faad94d42e6c69efbe02dbe9f944e7a7a0a2ee7086fd7bf0703775d96468"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.339871 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbz2j" podStartSLOduration=133.339853708 podStartE2EDuration="2m13.339853708s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.2539863 +0000 UTC m=+153.256444233" watchObservedRunningTime="2026-01-23 18:03:30.339853708 +0000 UTC m=+153.342311641" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.339959 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" podStartSLOduration=133.339953821 podStartE2EDuration="2m13.339953821s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.337343458 +0000 UTC m=+153.339801391" watchObservedRunningTime="2026-01-23 18:03:30.339953821 +0000 UTC m=+153.342411754" Jan 23 18:03:30 
crc kubenswrapper[4760]: I0123 18:03:30.366212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" event={"ID":"c08fd2c2-1700-4296-a369-62c3c9928a63","Type":"ContainerStarted","Data":"27cf4113612f9ed4215e69f912b90c2feeed841f040507244b09c443447ac2bd"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.385185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" event={"ID":"f08dd246-4139-4911-a4f8-4a5e693f12de","Type":"ContainerStarted","Data":"ea2e937828285e53b73a50a5b0a99d64fb7e1bfc332ec2024b0be7bb2f806958"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.386094 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lqr88" podStartSLOduration=134.386076057 podStartE2EDuration="2m14.386076057s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.382353355 +0000 UTC m=+153.384811288" watchObservedRunningTime="2026-01-23 18:03:30.386076057 +0000 UTC m=+153.388533990" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.399289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.400651 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 18:03:30.900635232 +0000 UTC m=+153.903093165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.414657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" event={"ID":"37b6e215-7d18-4578-8e22-90068a8dabf6","Type":"ContainerStarted","Data":"81b8032aeeeee9a158089ce674bd89d97d1eeeb63f8b9cdadcbd3e8091313dc7"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.416173 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kt5pm" podStartSLOduration=9.416162721 podStartE2EDuration="9.416162721s" podCreationTimestamp="2026-01-23 18:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.415710739 +0000 UTC m=+153.418168672" watchObservedRunningTime="2026-01-23 18:03:30.416162721 +0000 UTC m=+153.418620654" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.418164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" event={"ID":"4648a70d-5af3-459c-815f-d12089d27b88","Type":"ContainerStarted","Data":"2e795985d43bbe387b9ab07c8bd326a77d0f6950653ea0ccf0d5445744a6a066"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.420690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.486123 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" podStartSLOduration=134.486100108 podStartE2EDuration="2m14.486100108s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.470506446 +0000 UTC m=+153.472964379" watchObservedRunningTime="2026-01-23 18:03:30.486100108 +0000 UTC m=+153.488558091" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.488959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" event={"ID":"411040ce-40d3-4347-89d0-84f1a51bc0e7","Type":"ContainerStarted","Data":"515def317fb5dd144d9b98d4363f936ba312959396e76e6a82adb71c875fd350"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.502368 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.502838 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.002813071 +0000 UTC m=+154.005271004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.518152 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qkh24" podStartSLOduration=133.518136875 podStartE2EDuration="2m13.518136875s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.517155137 +0000 UTC m=+153.519613070" watchObservedRunningTime="2026-01-23 18:03:30.518136875 +0000 UTC m=+153.520594808" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.542844 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" event={"ID":"a9e97eeb-2c62-421e-b81a-875190083260","Type":"ContainerStarted","Data":"cc23548e0da8f36f2d22a57dc7d8e01d6cb7cde3cf3e3ad526d08054034829bb"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.543467 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.554170 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sd99g" podStartSLOduration=133.554144292 podStartE2EDuration="2m13.554144292s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.540881315 +0000 UTC m=+153.543339248" watchObservedRunningTime="2026-01-23 18:03:30.554144292 +0000 UTC m=+153.556602225" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.586310 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" event={"ID":"b9612c6f-0d03-4f5c-99d9-1e6681cea174","Type":"ContainerStarted","Data":"47af88e13d0bfe0c33989d03d1ed0e3142dc4648223398caed806f3352613db8"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.598468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qj7w6" event={"ID":"6f32934d-e69b-4b98-b6db-447125e38ae0","Type":"ContainerStarted","Data":"4fa57c17094014c797d4858c56879b982e134445077d57697b84313de3c56b5b"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.604195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.606720 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.106704537 +0000 UTC m=+154.109162470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.629840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" event={"ID":"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd","Type":"ContainerStarted","Data":"8457e7b311ba6c5381f6b473249687bf09715b08e186dac4e32f3cce21746f22"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.629885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" event={"ID":"8f888aa2-f299-458e-bf2a-6f8aaadc6ebd","Type":"ContainerStarted","Data":"c0e59edb7658158b3cd4d3ba61fc7cbdeb2c86c912e9ae5c4899d50d5b3aee6c"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.647111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" event={"ID":"096ce378-9dd6-4c27-a30b-1c619d486ccb","Type":"ContainerStarted","Data":"bc346eeb864f75840b269de0297194c565f34cd8a2b9cc68fe12aa856d9b348e"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.692933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6srzg" event={"ID":"f6df9414-4e8b-4593-8401-1ba799201143","Type":"ContainerStarted","Data":"e4ab8e9a476b2a00fcf459bbb3a78da1c808d1c7817e3ee4ab360e1ba1e06dde"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.693706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.693845 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" podStartSLOduration=134.693797689 podStartE2EDuration="2m14.693797689s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.630537228 +0000 UTC m=+153.632995161" watchObservedRunningTime="2026-01-23 18:03:30.693797689 +0000 UTC m=+153.696255622" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.695333 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ffzzt" podStartSLOduration=133.695325911 podStartE2EDuration="2m13.695325911s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.68555304 +0000 UTC m=+153.688010983" watchObservedRunningTime="2026-01-23 18:03:30.695325911 +0000 UTC m=+153.697783844" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.705004 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.705804 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.205788411 +0000 UTC m=+154.208246344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.708027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" event={"ID":"aff72bd3-e963-4c1b-8187-dbff951b1f8d","Type":"ContainerStarted","Data":"23bf769bba395edadd20f40b2413e91d0b4e54dfad61e24276ab090bd4610530"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.724712 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdqvd" podStartSLOduration=133.724697034 podStartE2EDuration="2m13.724697034s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.72416914 +0000 UTC m=+153.726627093" watchObservedRunningTime="2026-01-23 18:03:30.724697034 +0000 UTC m=+153.727154967" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.755672 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" event={"ID":"d5068e6c-7f43-4692-b8df-968ca0907a62","Type":"ContainerStarted","Data":"fba470fbe4fcd5ea1b6a8d5f041ecaf87dd209473d6f487005f4d5ba3955687e"} Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.756596 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-49s9b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.756649 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.758672 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-h45nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.758706 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h45nc" podUID="61a396e8-372f-4982-8994-d60baa42da95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.796773 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" podStartSLOduration=133.79675677 podStartE2EDuration="2m13.79675677s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.750375545 +0000 UTC m=+153.752833478" watchObservedRunningTime="2026-01-23 18:03:30.79675677 +0000 UTC m=+153.799214703" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.798023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mdqhm" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.799601 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-chh57" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.804634 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qq44z" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.811007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.813324 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.313309158 +0000 UTC m=+154.315767091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.840513 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 17:58:29 +0000 UTC, rotation deadline is 2026-12-11 10:07:52.746384514 +0000 UTC Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.840560 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7720h4m21.905827621s for next certificate rotation Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.846699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5b48b" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.872757 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:30 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:30 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:30 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.872999 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.875187 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-krv88" podStartSLOduration=133.875177431 podStartE2EDuration="2m13.875177431s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.810341926 +0000 UTC m=+153.812799859" watchObservedRunningTime="2026-01-23 18:03:30.875177431 +0000 UTC m=+153.877635354" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.913140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:30 crc kubenswrapper[4760]: E0123 18:03:30.914186 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.414170931 +0000 UTC m=+154.416628864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.966219 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6srzg" podStartSLOduration=8.966199352 podStartE2EDuration="8.966199352s" podCreationTimestamp="2026-01-23 18:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.960968678 +0000 UTC m=+153.963426621" watchObservedRunningTime="2026-01-23 18:03:30.966199352 +0000 UTC m=+153.968657295" Jan 23 18:03:30 crc kubenswrapper[4760]: I0123 18:03:30.967751 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" podStartSLOduration=134.967738985 podStartE2EDuration="2m14.967738985s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:30.892234634 +0000 UTC m=+153.894692577" watchObservedRunningTime="2026-01-23 18:03:30.967738985 +0000 UTC m=+153.970196928" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.030687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.031108 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.53108749 +0000 UTC m=+154.533545473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.048654 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sqwjm" podStartSLOduration=134.048634045 podStartE2EDuration="2m14.048634045s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:31.046089914 +0000 UTC m=+154.048547857" watchObservedRunningTime="2026-01-23 18:03:31.048634045 +0000 UTC m=+154.051091978" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.096491 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" podStartSLOduration=134.09647205 podStartE2EDuration="2m14.09647205s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:31.095508583 +0000 UTC m=+154.097966516" 
watchObservedRunningTime="2026-01-23 18:03:31.09647205 +0000 UTC m=+154.098929983" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.131752 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.131916 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.631891891 +0000 UTC m=+154.634349824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.132142 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.132641 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.632622931 +0000 UTC m=+154.635080864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.223819 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jq7dg" podStartSLOduration=135.223780395 podStartE2EDuration="2m15.223780395s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:31.213678755 +0000 UTC m=+154.216136708" watchObservedRunningTime="2026-01-23 18:03:31.223780395 +0000 UTC m=+154.226238328" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.235747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.235917 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.735888461 +0000 UTC m=+154.738346394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.236039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.236360 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.736346303 +0000 UTC m=+154.738804236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.337053 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.337291 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.837261288 +0000 UTC m=+154.839719221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.438430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.438941 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:31.938919592 +0000 UTC m=+154.941377535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.539463 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.539628 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.03960294 +0000 UTC m=+155.042060873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.539686 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.539993 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.039978931 +0000 UTC m=+155.042436924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.641307 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.141293046 +0000 UTC m=+155.143750979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.641249 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.641719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.642148 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.14212842 +0000 UTC m=+155.144586363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.735674 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.736840 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" podStartSLOduration=134.736821762 podStartE2EDuration="2m14.736821762s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:31.696600678 +0000 UTC m=+154.699058611" watchObservedRunningTime="2026-01-23 18:03:31.736821762 +0000 UTC m=+154.739279695" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.738030 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: W0123 18:03:31.743301 4760 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.743358 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.744140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.744692 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.244668799 +0000 UTC m=+155.247126732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.822374 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.839320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" event={"ID":"faa0722c-1acf-445e-8785-a8030be562b6","Type":"ContainerStarted","Data":"4b41ad3df5cf1f31f1525463e5bc1c349ff2044e881e71874aab164ff790e249"} Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.839372 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" event={"ID":"faa0722c-1acf-445e-8785-a8030be562b6","Type":"ContainerStarted","Data":"81eb46056da1cc60d38f152b3bc0d34a58e2bc4e94c771cf535f60f63a103907"} Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.840298 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.845531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.845595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.845642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.845684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxfzs\" (UniqueName: \"kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.846053 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.346034826 +0000 UTC m=+155.348492829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.860122 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.872752 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:31 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:31 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:31 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.872825 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.876778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.877505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.889598 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6srzg" 
event={"ID":"f6df9414-4e8b-4593-8401-1ba799201143","Type":"ContainerStarted","Data":"1ecdf0913d3b93bf7187714d4ed19d367b0c7d4748e9264f979128bdc7eb7f1a"} Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950323 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950548 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxfzs\" (UniqueName: \"kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950567 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950602 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950661 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qh2l\" (UniqueName: \"kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.950685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.951354 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: E0123 18:03:31.951729 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.451712282 +0000 UTC m=+155.454170215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.953629 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-879gc" event={"ID":"7c78fffb-6965-4ef7-b534-d713a7dd3318","Type":"ContainerStarted","Data":"1bbcae65c7cc6d6369276f0e25e123ef097b66103dc1fc573824fa2a1aa37519"} Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.953826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.977495 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.978522 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:31 crc kubenswrapper[4760]: I0123 18:03:31.989867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" podStartSLOduration=135.989850329 podStartE2EDuration="2m15.989850329s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:31.971944702 +0000 UTC m=+154.974402645" watchObservedRunningTime="2026-01-23 18:03:31.989850329 +0000 UTC m=+154.992308252" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.001145 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wl57f" event={"ID":"d5068e6c-7f43-4692-b8df-968ca0907a62","Type":"ContainerStarted","Data":"1fc119430017709154da3a70f874f94a4d06daa07f50a0b199c57ceabeea9753"} Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.003642 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.027874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxfzs\" (UniqueName: \"kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs\") pod \"community-operators-x8kgm\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.045135 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" event={"ID":"096ce378-9dd6-4c27-a30b-1c619d486ccb","Type":"ContainerStarted","Data":"8dd57b891fd0ee83e514e252e5c3298159736b99dfa30d622300bc3daa0b6963"} Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.045173 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" event={"ID":"096ce378-9dd6-4c27-a30b-1c619d486ccb","Type":"ContainerStarted","Data":"2471ac7d2223d952df398e2ad2512f4af0696336f1c35588b6c875613a0a290d"} Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.054200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.054266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.054288 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.054313 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qh2l\" (UniqueName: \"kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.055830 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.555818835 +0000 UTC m=+155.558276768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.056253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.056487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.058930 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s4mrj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.058999 4760 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.108717 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.111123 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qh2l\" (UniqueName: \"kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l\") pod \"certified-operators-fhzrk\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.128428 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.129463 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.144258 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.162626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.162779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.162877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.163005 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.163264 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22bg\" (UniqueName: \"kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.163445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvwq\" (UniqueName: \"kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.163555 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.163643 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.663630451 +0000 UTC m=+155.666088384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.226312 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265512 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22bg\" (UniqueName: \"kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvwq\" (UniqueName: \"kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265588 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265608 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.265657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.266063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.266349 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.766333095 +0000 UTC m=+155.768791098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.266368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.266642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.266836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.297584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f22bg\" (UniqueName: \"kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg\") pod \"community-operators-xfxx4\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.318531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvwq\" (UniqueName: \"kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq\") pod \"certified-operators-2jppg\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.355324 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8t4w" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.368167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.369091 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.869064339 +0000 UTC m=+155.871522272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.473202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.473510 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:32.973498562 +0000 UTC m=+155.975956485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.478666 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.574353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.575188 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.075167807 +0000 UTC m=+156.077625740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.685443 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.685779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.185767509 +0000 UTC m=+156.188225442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.786887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.787207 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.287193627 +0000 UTC m=+156.289651560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.886341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.886583 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.886724 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.887871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.888233 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.388221946 +0000 UTC m=+156.390679879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.918696 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:32 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:32 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:32 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.918741 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:32 crc kubenswrapper[4760]: I0123 18:03:32.990271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:32 crc kubenswrapper[4760]: E0123 18:03:32.990789 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 18:03:33.490775075 +0000 UTC m=+156.493232998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.095421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.095926 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.595909726 +0000 UTC m=+156.598367659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.104175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.105503 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" event={"ID":"096ce378-9dd6-4c27-a30b-1c619d486ccb","Type":"ContainerStarted","Data":"a9b7022f25858380db27414fe2c0829c6c541ec63b5ae7cad0ea75ceb9f90286"} Jan 23 18:03:33 crc kubenswrapper[4760]: W0123 18:03:33.129899 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod036e8482_197b_4a5f_b33a_792ca966a04b.slice/crio-2770086b2cc20b6b1ad8fa4d087def395e147ba060f7bf33322aff5ee4b9927c WatchSource:0}: Error finding container 2770086b2cc20b6b1ad8fa4d087def395e147ba060f7bf33322aff5ee4b9927c: Status 404 returned error can't find the container with id 2770086b2cc20b6b1ad8fa4d087def395e147ba060f7bf33322aff5ee4b9927c Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.196898 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.197034 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.697006996 +0000 UTC m=+156.699464929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.197329 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.200187 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.700177204 +0000 UTC m=+156.702635127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.229344 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:03:33 crc kubenswrapper[4760]: W0123 18:03:33.241511 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7132701_1753_45e3_abf6_09a7f589dddd.slice/crio-c52971881c94c6b5cb45d229301124568ccfee5e8b9d4574e07c700e7f33fbf0 WatchSource:0}: Error finding container c52971881c94c6b5cb45d229301124568ccfee5e8b9d4574e07c700e7f33fbf0: Status 404 returned error can't find the container with id c52971881c94c6b5cb45d229301124568ccfee5e8b9d4574e07c700e7f33fbf0 Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.298982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.299194 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.799161464 +0000 UTC m=+156.801619407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.299454 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.299733 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.799719369 +0000 UTC m=+156.802177302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.323664 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.404840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.404942 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.904926733 +0000 UTC m=+156.907384666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.405140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.405501 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:33.905486248 +0000 UTC m=+156.907944181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.471236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.473078 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:03:33 crc kubenswrapper[4760]: W0123 18:03:33.486719 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod550d2598_58ad_4e85_acd9_0bd0c945703e.slice/crio-45509015f3d486513e7f3c15e76b512613259c0d8819343cae6ee8b0fe55498f WatchSource:0}: Error finding container 45509015f3d486513e7f3c15e76b512613259c0d8819343cae6ee8b0fe55498f: Status 404 returned error can't find the container with id 45509015f3d486513e7f3c15e76b512613259c0d8819343cae6ee8b0fe55498f Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.506720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.507135 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.007109982 +0000 UTC m=+157.009567915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.507363 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.609483 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.609789 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.109777526 +0000 UTC m=+157.112235459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.710453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.710800 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.210783482 +0000 UTC m=+157.213241415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.710864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.711144 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.211135982 +0000 UTC m=+157.213593915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.740533 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T18:03:33.323691064Z","Handler":null,"Name":""} Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.812459 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.812735 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.312704265 +0000 UTC m=+157.315162208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.812819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:33 crc kubenswrapper[4760]: E0123 18:03:33.813175 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:03:34.313164748 +0000 UTC m=+157.315622741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9d9kf" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.852185 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.852228 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.869843 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:33 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:33 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:33 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.870197 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.913813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.936112 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.947375 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.948977 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.950350 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.951621 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.958048 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.958341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.958626 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.962465 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:03:33 crc kubenswrapper[4760]: I0123 18:03:33.965870 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.014797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.050288 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.050560 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.074633 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-h45nc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.074688 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h45nc" podUID="61a396e8-372f-4982-8994-d60baa42da95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.074698 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-h45nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.074736 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h45nc" podUID="61a396e8-372f-4982-8994-d60baa42da95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: 
connection refused" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.084341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9d9kf\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.107309 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.115638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.115689 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.115806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.115858 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.115897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9z86\" (UniqueName: \"kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.117255 4760 generic.go:334] "Generic (PLEG): container finished" podID="036e8482-197b-4a5f-b33a-792ca966a04b" containerID="5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab" exitCode=0 Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.117452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerDied","Data":"5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.117553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerStarted","Data":"2770086b2cc20b6b1ad8fa4d087def395e147ba060f7bf33322aff5ee4b9927c"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.118670 4760 generic.go:334] "Generic (PLEG): container finished" podID="c7132701-1753-45e3-abf6-09a7f589dddd" containerID="51a7755a7b38e5e9bd37bc8a6cf0898623ac2ae3a7c187c0827d24a076a4abeb" exitCode=0 Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.118715 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerDied","Data":"51a7755a7b38e5e9bd37bc8a6cf0898623ac2ae3a7c187c0827d24a076a4abeb"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.118730 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerStarted","Data":"c52971881c94c6b5cb45d229301124568ccfee5e8b9d4574e07c700e7f33fbf0"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.121173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" event={"ID":"096ce378-9dd6-4c27-a30b-1c619d486ccb","Type":"ContainerStarted","Data":"a5de8ab8374ae1fa7f80d6c306ca31a047c2ad991d7f3ed12172337bfc640764"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.121929 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.124678 4760 generic.go:334] "Generic (PLEG): container finished" podID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerID="15d7ac63c600b1f0b3dd6446cb2f2d519ce69ff181ae7e3364be2013dcdf0957" exitCode=0 Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.124717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerDied","Data":"15d7ac63c600b1f0b3dd6446cb2f2d519ce69ff181ae7e3364be2013dcdf0957"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.124772 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerStarted","Data":"45509015f3d486513e7f3c15e76b512613259c0d8819343cae6ee8b0fe55498f"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.127829 
4760 generic.go:334] "Generic (PLEG): container finished" podID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerID="786410984e0211995f2c64183fb4fdb1e59f3a1958eefb739bbd6bf0a590a8d0" exitCode=0 Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.127955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerDied","Data":"786410984e0211995f2c64183fb4fdb1e59f3a1958eefb739bbd6bf0a590a8d0"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.128516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerStarted","Data":"2b1d52811e2f5a8faef349f1a255c60684322b2e4fe9bf4b5697b0a854d0571f"} Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.189616 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.189999 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.194537 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-b9mm2" podStartSLOduration=12.194520358 podStartE2EDuration="12.194520358s" podCreationTimestamp="2026-01-23 18:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:34.19207032 +0000 UTC m=+157.194528263" watchObservedRunningTime="2026-01-23 18:03:34.194520358 +0000 UTC m=+157.196978291" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.217175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.217232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.217291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.217331 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.217359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9z86\" (UniqueName: \"kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.218054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.218269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.218457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.254663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.254760 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9z86\" (UniqueName: \"kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86\") pod \"redhat-marketplace-4td8g\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.272491 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.275038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.331785 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.333909 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.346202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.522829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.522881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.522910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtfp\" (UniqueName: \"kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") 
" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.524025 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.626784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.626836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.626865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtfp\" (UniqueName: \"kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.627325 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.627592 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.650988 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtfp\" (UniqueName: \"kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp\") pod \"redhat-marketplace-b685f\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.671581 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.674478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.719998 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.721661 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.729847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.738759 4760 patch_prober.go:28] interesting pod/apiserver-76f77b778f-l7lg7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]log ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]etcd ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/max-in-flight-filter ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 18:03:34 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 18:03:34 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-startinformers ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 18:03:34 crc kubenswrapper[4760]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 18:03:34 crc kubenswrapper[4760]: livez check failed Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.738831 
4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" podUID="faa0722c-1acf-445e-8785-a8030be562b6" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.768589 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.825852 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.831315 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.831371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qskt\" (UniqueName: \"kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.831450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: W0123 18:03:34.839723 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-poda7b61ef0_c7dc_402e_87b9_9017ed37699c.slice/crio-e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730 WatchSource:0}: Error finding container e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730: Status 404 returned error can't find the container with id e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730 Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.855521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.859694 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:34 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:34 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:34 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.859738 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.867732 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.867776 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.873943 4760 patch_prober.go:28] interesting pod/console-f9d7485db-h2x2h container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.874502 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h2x2h" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.935976 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.936141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.936225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qskt\" (UniqueName: \"kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.937824 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " 
pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.938613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.947103 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.952058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.982501 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:03:34 crc kubenswrapper[4760]: I0123 18:03:34.988287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qskt\" (UniqueName: \"kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt\") pod \"redhat-operators-m4kdb\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.039435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.039506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xn9\" (UniqueName: 
\"kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.039580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.140124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" event={"ID":"c1747fa8-d09e-4415-ab0e-e607a674dfbb","Type":"ContainerStarted","Data":"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.141486 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.141504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" event={"ID":"c1747fa8-d09e-4415-ab0e-e607a674dfbb","Type":"ContainerStarted","Data":"0b175298e35a76269616152438b777c8a236af3144f9b5831214065a5247d8c5"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.140560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.141079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.141592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5xn9\" (UniqueName: \"kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.141708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.142071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.145335 4760 generic.go:334] "Generic (PLEG): container finished" podID="aff72bd3-e963-4c1b-8187-dbff951b1f8d" containerID="23bf769bba395edadd20f40b2413e91d0b4e54dfad61e24276ab090bd4610530" exitCode=0 Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.145419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" 
event={"ID":"aff72bd3-e963-4c1b-8187-dbff951b1f8d","Type":"ContainerDied","Data":"23bf769bba395edadd20f40b2413e91d0b4e54dfad61e24276ab090bd4610530"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.146597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7b61ef0-c7dc-402e-87b9-9017ed37699c","Type":"ContainerStarted","Data":"e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.148589 4760 generic.go:334] "Generic (PLEG): container finished" podID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerID="eab1ec1f41fa88034240ce55df1314c14b4e3abff2781f00b75972e146448e4c" exitCode=0 Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.149627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerDied","Data":"eab1ec1f41fa88034240ce55df1314c14b4e3abff2781f00b75972e146448e4c"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.149650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerStarted","Data":"5951f6baf424c7ac7933115cbed6fbae0438a0aa7856a56dc94a4a38f2f98de5"} Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.154857 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.160547 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" podStartSLOduration=138.160530798 podStartE2EDuration="2m18.160530798s" podCreationTimestamp="2026-01-23 18:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 18:03:35.156852295 +0000 UTC m=+158.159310228" watchObservedRunningTime="2026-01-23 18:03:35.160530798 +0000 UTC m=+158.162988731" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.166187 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5xn9\" (UniqueName: \"kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9\") pod \"redhat-operators-j9ndf\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: W0123 18:03:35.187678 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0ba54fc_0961_4e37_a830_fd9b7582b49b.slice/crio-2f6122fcbef1b9edbaada39cf1dd7984dc397fa00d4891e53b373fc5cb03bbe6 WatchSource:0}: Error finding container 2f6122fcbef1b9edbaada39cf1dd7984dc397fa00d4891e53b373fc5cb03bbe6: Status 404 returned error can't find the container with id 2f6122fcbef1b9edbaada39cf1dd7984dc397fa00d4891e53b373fc5cb03bbe6 Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.212508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.303522 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.626445 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.720062 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.771613 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:03:35 crc kubenswrapper[4760]: W0123 18:03:35.797005 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf43a009_6480_45b5_ad78_64e9f442686a.slice/crio-cf18c8e35bfeaf4ec11cabfb85d064601dae8f6a258a3054eb0332916ab1150c WatchSource:0}: Error finding container cf18c8e35bfeaf4ec11cabfb85d064601dae8f6a258a3054eb0332916ab1150c: Status 404 returned error can't find the container with id cf18c8e35bfeaf4ec11cabfb85d064601dae8f6a258a3054eb0332916ab1150c Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.860938 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:35 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:35 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:35 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:35 crc kubenswrapper[4760]: I0123 18:03:35.861003 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.169593 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerID="b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac" exitCode=0 Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.169648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerDied","Data":"b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.169676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerStarted","Data":"2f6122fcbef1b9edbaada39cf1dd7984dc397fa00d4891e53b373fc5cb03bbe6"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.182018 4760 generic.go:334] "Generic (PLEG): container finished" podID="a7b61ef0-c7dc-402e-87b9-9017ed37699c" containerID="831819d415ab49c9fb5dd63fc7c65f0a6858e6bed50b46c3d291b95793a55496" exitCode=0 Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.183216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7b61ef0-c7dc-402e-87b9-9017ed37699c","Type":"ContainerDied","Data":"831819d415ab49c9fb5dd63fc7c65f0a6858e6bed50b46c3d291b95793a55496"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.185278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerStarted","Data":"85ce4aab9491e7a5cfdec3a88eab3a2239730d37dcc11c77d6181e177e8c7370"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.203604 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerStarted","Data":"fd750587932728c710a393a67a643e365e182c7f39aba766778ce31071ad081d"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.203664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerStarted","Data":"cf18c8e35bfeaf4ec11cabfb85d064601dae8f6a258a3054eb0332916ab1150c"} Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.549377 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.735032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume\") pod \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.735134 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume\") pod \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.735297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8fb\" (UniqueName: \"kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb\") pod \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\" (UID: \"aff72bd3-e963-4c1b-8187-dbff951b1f8d\") " Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.735852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume" (OuterVolumeSpecName: "config-volume") pod "aff72bd3-e963-4c1b-8187-dbff951b1f8d" (UID: "aff72bd3-e963-4c1b-8187-dbff951b1f8d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.736732 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff72bd3-e963-4c1b-8187-dbff951b1f8d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.743501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb" (OuterVolumeSpecName: "kube-api-access-kf8fb") pod "aff72bd3-e963-4c1b-8187-dbff951b1f8d" (UID: "aff72bd3-e963-4c1b-8187-dbff951b1f8d"). InnerVolumeSpecName "kube-api-access-kf8fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.743696 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aff72bd3-e963-4c1b-8187-dbff951b1f8d" (UID: "aff72bd3-e963-4c1b-8187-dbff951b1f8d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.838007 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aff72bd3-e963-4c1b-8187-dbff951b1f8d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.838038 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf8fb\" (UniqueName: \"kubernetes.io/projected/aff72bd3-e963-4c1b-8187-dbff951b1f8d-kube-api-access-kf8fb\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.859811 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:36 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:36 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:36 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:36 crc kubenswrapper[4760]: I0123 18:03:36.859886 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.221342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" event={"ID":"aff72bd3-e963-4c1b-8187-dbff951b1f8d","Type":"ContainerDied","Data":"5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6"} Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.221400 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c546b89232fc55884160eb691f45fe240c18e4e1d2f28eb9b3335cca90525e6" Jan 23 
18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.221401 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.230077 4760 generic.go:334] "Generic (PLEG): container finished" podID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerID="23c2ddeaae5c38d409d34365a2eaa0392b9f3d0ff69e5941c707b887d65fa71d" exitCode=0 Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.230225 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerDied","Data":"23c2ddeaae5c38d409d34365a2eaa0392b9f3d0ff69e5941c707b887d65fa71d"} Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.243032 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf43a009-6480-45b5-ad78-64e9f442686a" containerID="fd750587932728c710a393a67a643e365e182c7f39aba766778ce31071ad081d" exitCode=0 Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.243244 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerDied","Data":"fd750587932728c710a393a67a643e365e182c7f39aba766778ce31071ad081d"} Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.701759 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.849714 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir\") pod \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.849857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a7b61ef0-c7dc-402e-87b9-9017ed37699c" (UID: "a7b61ef0-c7dc-402e-87b9-9017ed37699c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.849911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access\") pod \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\" (UID: \"a7b61ef0-c7dc-402e-87b9-9017ed37699c\") " Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.850128 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.857951 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:37 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:37 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:37 crc kubenswrapper[4760]: healthz check 
failed Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.858066 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.878378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7b61ef0-c7dc-402e-87b9-9017ed37699c" (UID: "a7b61ef0-c7dc-402e-87b9-9017ed37699c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:03:37 crc kubenswrapper[4760]: I0123 18:03:37.951701 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7b61ef0-c7dc-402e-87b9-9017ed37699c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.189591 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:03:38 crc kubenswrapper[4760]: E0123 18:03:38.190091 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff72bd3-e963-4c1b-8187-dbff951b1f8d" containerName="collect-profiles" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.190111 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff72bd3-e963-4c1b-8187-dbff951b1f8d" containerName="collect-profiles" Jan 23 18:03:38 crc kubenswrapper[4760]: E0123 18:03:38.190123 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b61ef0-c7dc-402e-87b9-9017ed37699c" containerName="pruner" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.190131 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b61ef0-c7dc-402e-87b9-9017ed37699c" containerName="pruner" Jan 23 18:03:38 
crc kubenswrapper[4760]: I0123 18:03:38.190232 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b61ef0-c7dc-402e-87b9-9017ed37699c" containerName="pruner" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.190246 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff72bd3-e963-4c1b-8187-dbff951b1f8d" containerName="collect-profiles" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.190611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.196616 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.196872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.198944 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.261314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7b61ef0-c7dc-402e-87b9-9017ed37699c","Type":"ContainerDied","Data":"e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730"} Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.261355 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e76dde8e40ab39b6417b7d0c4fdf27279315ff67d46258855cb4e0e9af6730" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.261372 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.357165 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.357307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.458012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.458083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.458183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.475624 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.617786 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.857733 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:38 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:38 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:38 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.858036 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.863345 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:03:38 crc kubenswrapper[4760]: I0123 18:03:38.869469 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/009bf3d0-1239-4b72-8a29-8b5e5964bdac-metrics-certs\") pod \"network-metrics-daemon-sw8p8\" (UID: \"009bf3d0-1239-4b72-8a29-8b5e5964bdac\") " pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.137178 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw8p8" Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.199386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.210241 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-l7lg7" Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.231786 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.298279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29","Type":"ContainerStarted","Data":"b7bfc043546ad99d5043d86a3d8f7d8a67e3c99036bba916d9bf68e8d7716bc8"} Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.571010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sw8p8"] Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.858679 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:39 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:39 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:39 crc kubenswrapper[4760]: healthz check 
failed Jan 23 18:03:39 crc kubenswrapper[4760]: I0123 18:03:39.858955 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.253546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6srzg" Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.326678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29","Type":"ContainerStarted","Data":"4b8da2d7d187008c0f270131a632971a72e162a997b2f981354c4222112efd93"} Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.330214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" event={"ID":"009bf3d0-1239-4b72-8a29-8b5e5964bdac","Type":"ContainerStarted","Data":"692985692f5aa43414df470db6b73b667d5cdd61029c5efa6b835e50afe65df3"} Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.343948 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.343930251 podStartE2EDuration="2.343930251s" podCreationTimestamp="2026-01-23 18:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:40.342242864 +0000 UTC m=+163.344700807" watchObservedRunningTime="2026-01-23 18:03:40.343930251 +0000 UTC m=+163.346388174" Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.858668 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:40 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:40 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:40 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:40 crc kubenswrapper[4760]: I0123 18:03:40.858923 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:41 crc kubenswrapper[4760]: I0123 18:03:41.340772 4760 generic.go:334] "Generic (PLEG): container finished" podID="f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" containerID="4b8da2d7d187008c0f270131a632971a72e162a997b2f981354c4222112efd93" exitCode=0 Jan 23 18:03:41 crc kubenswrapper[4760]: I0123 18:03:41.340851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29","Type":"ContainerDied","Data":"4b8da2d7d187008c0f270131a632971a72e162a997b2f981354c4222112efd93"} Jan 23 18:03:41 crc kubenswrapper[4760]: I0123 18:03:41.343969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" event={"ID":"009bf3d0-1239-4b72-8a29-8b5e5964bdac","Type":"ContainerStarted","Data":"622e2aaad195bb69ea0d3689a5e1ba0331ac4f68b990ab4b2ad4d435ccc77134"} Jan 23 18:03:41 crc kubenswrapper[4760]: I0123 18:03:41.857636 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:41 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:41 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:41 crc kubenswrapper[4760]: healthz check failed Jan 23 
18:03:41 crc kubenswrapper[4760]: I0123 18:03:41.857711 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:42 crc kubenswrapper[4760]: I0123 18:03:42.857788 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:42 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:42 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:42 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:42 crc kubenswrapper[4760]: I0123 18:03:42.858052 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:43 crc kubenswrapper[4760]: I0123 18:03:43.858720 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:43 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:43 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:43 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:43 crc kubenswrapper[4760]: I0123 18:03:43.858793 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:44 crc 
kubenswrapper[4760]: I0123 18:03:44.088111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-h45nc" Jan 23 18:03:44 crc kubenswrapper[4760]: I0123 18:03:44.857572 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:44 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:44 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:44 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:44 crc kubenswrapper[4760]: I0123 18:03:44.857653 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:44 crc kubenswrapper[4760]: I0123 18:03:44.868488 4760 patch_prober.go:28] interesting pod/console-f9d7485db-h2x2h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 23 18:03:44 crc kubenswrapper[4760]: I0123 18:03:44.868553 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h2x2h" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 23 18:03:45 crc kubenswrapper[4760]: I0123 18:03:45.857524 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Jan 23 18:03:45 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:45 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:45 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:45 crc kubenswrapper[4760]: I0123 18:03:45.857811 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:46 crc kubenswrapper[4760]: I0123 18:03:46.075581 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:03:46 crc kubenswrapper[4760]: I0123 18:03:46.075646 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:03:46 crc kubenswrapper[4760]: I0123 18:03:46.857354 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:46 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:46 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:46 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:46 crc kubenswrapper[4760]: I0123 18:03:46.857441 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" 
podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.088306 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.207996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access\") pod \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.208073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir\") pod \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\" (UID: \"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29\") " Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.208256 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" (UID: "f416cc4d-fc1a-4df1-bd92-9ce6f9590c29"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.208341 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.218289 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" (UID: "f416cc4d-fc1a-4df1-bd92-9ce6f9590c29"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.311693 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f416cc4d-fc1a-4df1-bd92-9ce6f9590c29-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.385175 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.385093 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f416cc4d-fc1a-4df1-bd92-9ce6f9590c29","Type":"ContainerDied","Data":"b7bfc043546ad99d5043d86a3d8f7d8a67e3c99036bba916d9bf68e8d7716bc8"} Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.386422 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7bfc043546ad99d5043d86a3d8f7d8a67e3c99036bba916d9bf68e8d7716bc8" Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.859464 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:47 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 23 18:03:47 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:47 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:47 crc kubenswrapper[4760]: I0123 18:03:47.859538 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:48 crc kubenswrapper[4760]: I0123 18:03:48.858391 4760 patch_prober.go:28] interesting pod/router-default-5444994796-qd549 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:03:48 crc kubenswrapper[4760]: [+]has-synced ok Jan 23 18:03:48 crc kubenswrapper[4760]: [+]process-running ok Jan 23 18:03:48 crc kubenswrapper[4760]: healthz check failed Jan 23 18:03:48 crc kubenswrapper[4760]: I0123 
18:03:48.859535 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qd549" podUID="987bbc21-84aa-4e45-bb94-b0639da3c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:03:49 crc kubenswrapper[4760]: I0123 18:03:49.858216 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:49 crc kubenswrapper[4760]: I0123 18:03:49.860247 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qd549" Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.403690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw8p8" event={"ID":"009bf3d0-1239-4b72-8a29-8b5e5964bdac","Type":"ContainerStarted","Data":"1767ee7b289efa70f9334e8d95515d23f749e61045b8281f2c4ab9888fcb8e85"} Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.406690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-w9575_4648a70d-5af3-459c-815f-d12089d27b88/cluster-samples-operator/0.log" Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.406744 4760 generic.go:334] "Generic (PLEG): container finished" podID="4648a70d-5af3-459c-815f-d12089d27b88" containerID="6b406f00f204468bcdfb4de81b917b4cfab57a1500530198e873a353987d9cd0" exitCode=2 Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.406852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" event={"ID":"4648a70d-5af3-459c-815f-d12089d27b88","Type":"ContainerDied","Data":"6b406f00f204468bcdfb4de81b917b4cfab57a1500530198e873a353987d9cd0"} Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.407357 4760 scope.go:117] "RemoveContainer" 
containerID="6b406f00f204468bcdfb4de81b917b4cfab57a1500530198e873a353987d9cd0" Jan 23 18:03:50 crc kubenswrapper[4760]: I0123 18:03:50.446044 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sw8p8" podStartSLOduration=154.446014317 podStartE2EDuration="2m34.446014317s" podCreationTimestamp="2026-01-23 18:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:03:50.441533023 +0000 UTC m=+173.443990976" watchObservedRunningTime="2026-01-23 18:03:50.446014317 +0000 UTC m=+173.448472250" Jan 23 18:03:53 crc kubenswrapper[4760]: I0123 18:03:53.536819 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:03:54 crc kubenswrapper[4760]: I0123 18:03:54.113448 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:03:54 crc kubenswrapper[4760]: I0123 18:03:54.945350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:03:54 crc kubenswrapper[4760]: I0123 18:03:54.951775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:04:04 crc kubenswrapper[4760]: I0123 18:04:04.399017 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jbmsp" Jan 23 18:04:08 crc kubenswrapper[4760]: E0123 18:04:08.897449 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 18:04:08 crc kubenswrapper[4760]: E0123 
18:04:08.897958 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qskt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-m4kdb_openshift-marketplace(dc9cf1ca-5861-414c-a7ab-22380486fd2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:08 crc kubenswrapper[4760]: E0123 18:04:08.899216 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-m4kdb" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" Jan 23 18:04:09 crc kubenswrapper[4760]: E0123 18:04:09.935174 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 18:04:09 crc kubenswrapper[4760]: E0123 18:04:09.935312 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5xn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmo
rProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j9ndf_openshift-marketplace(cf43a009-6480-45b5-ad78-64e9f442686a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:09 crc kubenswrapper[4760]: E0123 18:04:09.936573 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j9ndf" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" Jan 23 18:04:09 crc kubenswrapper[4760]: E0123 18:04:09.959671 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-m4kdb" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" Jan 23 18:04:10 crc kubenswrapper[4760]: E0123 18:04:10.251102 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 18:04:10 crc kubenswrapper[4760]: E0123 18:04:10.251316 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxfzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x8kgm_openshift-marketplace(550d2598-58ad-4e85-acd9-0bd0c945703e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:10 crc kubenswrapper[4760]: E0123 18:04:10.252537 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x8kgm" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" Jan 23 18:04:13 crc 
kubenswrapper[4760]: E0123 18:04:13.687116 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j9ndf" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" Jan 23 18:04:13 crc kubenswrapper[4760]: E0123 18:04:13.687193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x8kgm" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" Jan 23 18:04:13 crc kubenswrapper[4760]: E0123 18:04:13.835877 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 18:04:13 crc kubenswrapper[4760]: E0123 18:04:13.836033 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mtfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b685f_openshift-marketplace(d0ba54fc-0961-4e37-a830-fd9b7582b49b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:13 crc kubenswrapper[4760]: E0123 18:04:13.837571 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b685f" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" Jan 23 18:04:13 crc 
kubenswrapper[4760]: I0123 18:04:13.988703 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:04:13 crc kubenswrapper[4760]: E0123 18:04:13.989168 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" containerName="pruner" Jan 23 18:04:13 crc kubenswrapper[4760]: I0123 18:04:13.989179 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" containerName="pruner" Jan 23 18:04:13 crc kubenswrapper[4760]: I0123 18:04:13.989311 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f416cc4d-fc1a-4df1-bd92-9ce6f9590c29" containerName="pruner" Jan 23 18:04:13 crc kubenswrapper[4760]: I0123 18:04:13.989783 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:13 crc kubenswrapper[4760]: I0123 18:04:13.996145 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 18:04:13 crc kubenswrapper[4760]: I0123 18:04:13.996495 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.001427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.017297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.017587 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: E0123 18:04:14.056889 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 18:04:14 crc kubenswrapper[4760]: E0123 18:04:14.057036 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f22bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xfxx4_openshift-marketplace(3500fe91-533f-4f4d-85c0-071bf05d5916): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:14 crc kubenswrapper[4760]: E0123 18:04:14.058776 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xfxx4" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.127244 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.127351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.127364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.147866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:14 crc kubenswrapper[4760]: I0123 18:04:14.322304 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.202945 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xfxx4" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.202945 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b685f" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.308614 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.309015 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qh2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fhzrk_openshift-marketplace(036e8482-197b-4a5f-b33a-792ca966a04b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.310490 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fhzrk" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.315289 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.315462 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9z86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4td8g_openshift-marketplace(861bf30c-c95a-42cf-9ced-f5cfbb2265c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.316542 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4td8g" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.331672 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.331831 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldvwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2jppg_openshift-marketplace(c7132701-1753-45e3-abf6-09a7f589dddd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.333032 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2jppg" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" Jan 23 18:04:15 crc 
kubenswrapper[4760]: I0123 18:04:15.539265 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-w9575_4648a70d-5af3-459c-815f-d12089d27b88/cluster-samples-operator/0.log" Jan 23 18:04:15 crc kubenswrapper[4760]: I0123 18:04:15.539391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9575" event={"ID":"4648a70d-5af3-459c-815f-d12089d27b88","Type":"ContainerStarted","Data":"d6f506836ba0e93522248bf9948f3fe6fc7e4e681c6a2436d363432b93621ff5"} Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.541067 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jppg" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.541292 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fhzrk" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" Jan 23 18:04:15 crc kubenswrapper[4760]: E0123 18:04:15.541851 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4td8g" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" Jan 23 18:04:15 crc kubenswrapper[4760]: I0123 18:04:15.629446 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:04:15 crc kubenswrapper[4760]: 
W0123 18:04:15.641748 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod18168e5a_87e7_4075_b14a_41eb8db1fde2.slice/crio-acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799 WatchSource:0}: Error finding container acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799: Status 404 returned error can't find the container with id acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799 Jan 23 18:04:16 crc kubenswrapper[4760]: I0123 18:04:16.076159 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:16 crc kubenswrapper[4760]: I0123 18:04:16.076217 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:04:16 crc kubenswrapper[4760]: I0123 18:04:16.546100 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18168e5a-87e7-4075-b14a-41eb8db1fde2","Type":"ContainerStarted","Data":"d0a16360f203420e8d21d5529d25a268cc7e43bc59d94ae51e279184ebb3c5f6"} Jan 23 18:04:16 crc kubenswrapper[4760]: I0123 18:04:16.546390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18168e5a-87e7-4075-b14a-41eb8db1fde2","Type":"ContainerStarted","Data":"acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799"} Jan 23 18:04:16 crc kubenswrapper[4760]: I0123 18:04:16.564227 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.560953849 podStartE2EDuration="3.560953849s" podCreationTimestamp="2026-01-23 18:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:04:16.560373827 +0000 UTC m=+199.562831770" watchObservedRunningTime="2026-01-23 18:04:16.560953849 +0000 UTC m=+199.563411782" Jan 23 18:04:17 crc kubenswrapper[4760]: I0123 18:04:17.551620 4760 generic.go:334] "Generic (PLEG): container finished" podID="18168e5a-87e7-4075-b14a-41eb8db1fde2" containerID="d0a16360f203420e8d21d5529d25a268cc7e43bc59d94ae51e279184ebb3c5f6" exitCode=0 Jan 23 18:04:17 crc kubenswrapper[4760]: I0123 18:04:17.551666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18168e5a-87e7-4075-b14a-41eb8db1fde2","Type":"ContainerDied","Data":"d0a16360f203420e8d21d5529d25a268cc7e43bc59d94ae51e279184ebb3c5f6"} Jan 23 18:04:18 crc kubenswrapper[4760]: I0123 18:04:18.823233 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.023935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access\") pod \"18168e5a-87e7-4075-b14a-41eb8db1fde2\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.024040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir\") pod \"18168e5a-87e7-4075-b14a-41eb8db1fde2\" (UID: \"18168e5a-87e7-4075-b14a-41eb8db1fde2\") " Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.024183 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18168e5a-87e7-4075-b14a-41eb8db1fde2" (UID: "18168e5a-87e7-4075-b14a-41eb8db1fde2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.024438 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18168e5a-87e7-4075-b14a-41eb8db1fde2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.032922 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18168e5a-87e7-4075-b14a-41eb8db1fde2" (UID: "18168e5a-87e7-4075-b14a-41eb8db1fde2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.125209 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18168e5a-87e7-4075-b14a-41eb8db1fde2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.564711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18168e5a-87e7-4075-b14a-41eb8db1fde2","Type":"ContainerDied","Data":"acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799"} Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.564759 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd7d50056b9e7db30c71612cdfe338951fbb9755ef818cdbf7231184008e799" Jan 23 18:04:19 crc kubenswrapper[4760]: I0123 18:04:19.565079 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.983947 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:04:20 crc kubenswrapper[4760]: E0123 18:04:20.984480 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18168e5a-87e7-4075-b14a-41eb8db1fde2" containerName="pruner" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.984495 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="18168e5a-87e7-4075-b14a-41eb8db1fde2" containerName="pruner" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.984629 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="18168e5a-87e7-4075-b14a-41eb8db1fde2" containerName="pruner" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.985107 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.987075 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.988431 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 18:04:20 crc kubenswrapper[4760]: I0123 18:04:20.994039 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.150641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.150715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.150772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.251904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.252175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.252023 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.252240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.252282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.269532 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.300503 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:04:21 crc kubenswrapper[4760]: I0123 18:04:21.703461 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:04:22 crc kubenswrapper[4760]: I0123 18:04:22.585449 4760 generic.go:334] "Generic (PLEG): container finished" podID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerID="fc97dbc07c3c398803a64cb7ac437dba1407c4514fbcfb3850b8488958ef00b4" exitCode=0 Jan 23 18:04:22 crc kubenswrapper[4760]: I0123 18:04:22.585529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerDied","Data":"fc97dbc07c3c398803a64cb7ac437dba1407c4514fbcfb3850b8488958ef00b4"} Jan 23 18:04:22 crc kubenswrapper[4760]: I0123 18:04:22.587947 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"690bc7ab-1ccc-49ce-a86f-18a94fa042c7","Type":"ContainerStarted","Data":"00bdb585ad3aee6ba3329813e3060a9c6aea40d04ddd5006aa9878e72793d0b8"} Jan 23 18:04:22 crc kubenswrapper[4760]: I0123 18:04:22.588036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"690bc7ab-1ccc-49ce-a86f-18a94fa042c7","Type":"ContainerStarted","Data":"8cb5d798c909bce5ab9dab192e79f4b249327788a862d00f3f90dc10a8a93cf0"} Jan 23 18:04:22 crc kubenswrapper[4760]: I0123 18:04:22.616469 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.616450886 podStartE2EDuration="2.616450886s" podCreationTimestamp="2026-01-23 18:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:04:22.615232639 +0000 UTC m=+205.617690572" watchObservedRunningTime="2026-01-23 18:04:22.616450886 +0000 UTC m=+205.618908829" Jan 23 18:04:23 crc kubenswrapper[4760]: I0123 18:04:23.610293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerStarted","Data":"a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79"} Jan 23 18:04:25 crc kubenswrapper[4760]: I0123 18:04:25.213289 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:04:25 crc kubenswrapper[4760]: I0123 18:04:25.213336 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:04:26 crc kubenswrapper[4760]: I0123 18:04:26.275374 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m4kdb" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" probeResult="failure" output=< Jan 23 18:04:26 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:04:26 crc kubenswrapper[4760]: > Jan 23 18:04:26 crc kubenswrapper[4760]: I0123 18:04:26.617657 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m4kdb" podStartSLOduration=6.843302073 podStartE2EDuration="52.617638336s" podCreationTimestamp="2026-01-23 18:03:34 +0000 UTC" firstStartedPulling="2026-01-23 18:03:37.234264801 +0000 UTC m=+160.236722734" lastFinishedPulling="2026-01-23 18:04:23.008601064 +0000 UTC m=+206.011058997" observedRunningTime="2026-01-23 18:04:23.630631901 +0000 UTC m=+206.633089834" watchObservedRunningTime="2026-01-23 18:04:26.617638336 +0000 UTC m=+209.620096269" Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.638103 4760 
generic.go:334] "Generic (PLEG): container finished" podID="036e8482-197b-4a5f-b33a-792ca966a04b" containerID="d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329" exitCode=0 Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.638117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerDied","Data":"d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329"} Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.647355 4760 generic.go:334] "Generic (PLEG): container finished" podID="c7132701-1753-45e3-abf6-09a7f589dddd" containerID="7959a74dc3fb0e4f28a3b09803083c97a9bca080e2d124257f37a725294c2dfc" exitCode=0 Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.647481 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerDied","Data":"7959a74dc3fb0e4f28a3b09803083c97a9bca080e2d124257f37a725294c2dfc"} Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.649915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerStarted","Data":"48cbccd81fffd36c754c2bc5780e849c2cf54462ce443e46b8aafb29f1a31873"} Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.656962 4760 generic.go:334] "Generic (PLEG): container finished" podID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerID="470265aa30e1084d941f0d3868a12f74b551b254e3c7a1cc0f267fb25abfe2cf" exitCode=0 Jan 23 18:04:28 crc kubenswrapper[4760]: I0123 18:04:28.657012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerDied","Data":"470265aa30e1084d941f0d3868a12f74b551b254e3c7a1cc0f267fb25abfe2cf"} Jan 23 18:04:29 
crc kubenswrapper[4760]: I0123 18:04:29.663312 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerStarted","Data":"5dffd14f760d0967dc6f8886cdd0178b7c94e3ab67c09f543c762056e396e2bc"} Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.665483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerStarted","Data":"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7"} Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.668666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerStarted","Data":"0cd3b852b5ef244dfc1eb14ab66c124d6ea9c5361581c0562d7b24c359b1c3a5"} Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.670690 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf43a009-6480-45b5-ad78-64e9f442686a" containerID="48cbccd81fffd36c754c2bc5780e849c2cf54462ce443e46b8aafb29f1a31873" exitCode=0 Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.670730 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerDied","Data":"48cbccd81fffd36c754c2bc5780e849c2cf54462ce443e46b8aafb29f1a31873"} Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.672806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerStarted","Data":"b3ed716d8512f68a637f6e3c15dc6fbca65377bd0f8731ed6b73bf52e551f163"} Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.707261 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-4td8g" podStartSLOduration=2.784451799 podStartE2EDuration="56.707245871s" podCreationTimestamp="2026-01-23 18:03:33 +0000 UTC" firstStartedPulling="2026-01-23 18:03:35.154658325 +0000 UTC m=+158.157116258" lastFinishedPulling="2026-01-23 18:04:29.077452397 +0000 UTC m=+212.079910330" observedRunningTime="2026-01-23 18:04:29.686869954 +0000 UTC m=+212.689327887" watchObservedRunningTime="2026-01-23 18:04:29.707245871 +0000 UTC m=+212.709703804" Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.725666 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fhzrk" podStartSLOduration=3.713687556 podStartE2EDuration="58.725650256s" podCreationTimestamp="2026-01-23 18:03:31 +0000 UTC" firstStartedPulling="2026-01-23 18:03:34.121717372 +0000 UTC m=+157.124175305" lastFinishedPulling="2026-01-23 18:04:29.133680072 +0000 UTC m=+212.136138005" observedRunningTime="2026-01-23 18:04:29.708400015 +0000 UTC m=+212.710857948" watchObservedRunningTime="2026-01-23 18:04:29.725650256 +0000 UTC m=+212.728108189" Jan 23 18:04:29 crc kubenswrapper[4760]: I0123 18:04:29.760446 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2jppg" podStartSLOduration=2.7968591910000002 podStartE2EDuration="57.760432101s" podCreationTimestamp="2026-01-23 18:03:32 +0000 UTC" firstStartedPulling="2026-01-23 18:03:34.122244216 +0000 UTC m=+157.124702149" lastFinishedPulling="2026-01-23 18:04:29.085817126 +0000 UTC m=+212.088275059" observedRunningTime="2026-01-23 18:04:29.759779997 +0000 UTC m=+212.762237920" watchObservedRunningTime="2026-01-23 18:04:29.760432101 +0000 UTC m=+212.762890034" Jan 23 18:04:30 crc kubenswrapper[4760]: I0123 18:04:30.681988 4760 generic.go:334] "Generic (PLEG): container finished" podID="550d2598-58ad-4e85-acd9-0bd0c945703e" 
containerID="b3ed716d8512f68a637f6e3c15dc6fbca65377bd0f8731ed6b73bf52e551f163" exitCode=0 Jan 23 18:04:30 crc kubenswrapper[4760]: I0123 18:04:30.682073 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerDied","Data":"b3ed716d8512f68a637f6e3c15dc6fbca65377bd0f8731ed6b73bf52e551f163"} Jan 23 18:04:30 crc kubenswrapper[4760]: I0123 18:04:30.684284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerStarted","Data":"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8"} Jan 23 18:04:31 crc kubenswrapper[4760]: I0123 18:04:31.691898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerStarted","Data":"e6ff88a2cb7e11dc11773c10ad0d8499ce68d969de93dbd89c187e64d0113637"} Jan 23 18:04:31 crc kubenswrapper[4760]: I0123 18:04:31.701778 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerStarted","Data":"de215f33a438c51a3fff03023471da4b43ffc29ae47ac729a6fee59a76986164"} Jan 23 18:04:31 crc kubenswrapper[4760]: I0123 18:04:31.704252 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerID="ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8" exitCode=0 Jan 23 18:04:31 crc kubenswrapper[4760]: I0123 18:04:31.704298 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerDied","Data":"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8"} Jan 23 18:04:31 crc kubenswrapper[4760]: 
I0123 18:04:31.720043 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9ndf" podStartSLOduration=4.299680369 podStartE2EDuration="57.720025997s" podCreationTimestamp="2026-01-23 18:03:34 +0000 UTC" firstStartedPulling="2026-01-23 18:03:37.246265183 +0000 UTC m=+160.248723116" lastFinishedPulling="2026-01-23 18:04:30.666610821 +0000 UTC m=+213.669068744" observedRunningTime="2026-01-23 18:04:31.716005841 +0000 UTC m=+214.718463774" watchObservedRunningTime="2026-01-23 18:04:31.720025997 +0000 UTC m=+214.722483930" Jan 23 18:04:31 crc kubenswrapper[4760]: I0123 18:04:31.753654 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8kgm" podStartSLOduration=3.665692333 podStartE2EDuration="1m0.753623138s" podCreationTimestamp="2026-01-23 18:03:31 +0000 UTC" firstStartedPulling="2026-01-23 18:03:34.12598798 +0000 UTC m=+157.128445913" lastFinishedPulling="2026-01-23 18:04:31.213918785 +0000 UTC m=+214.216376718" observedRunningTime="2026-01-23 18:04:31.753171757 +0000 UTC m=+214.755629690" watchObservedRunningTime="2026-01-23 18:04:31.753623138 +0000 UTC m=+214.756081071" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.227459 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.227829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.275832 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.479540 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:32 
crc kubenswrapper[4760]: I0123 18:04:32.479587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.522124 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.711035 4760 generic.go:334] "Generic (PLEG): container finished" podID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerID="2f4733136e18869c7a4ee1da0c7494e210b53580e37df3cd5730be4dd8c95739" exitCode=0 Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.711128 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerDied","Data":"2f4733136e18869c7a4ee1da0c7494e210b53580e37df3cd5730be4dd8c95739"} Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.714671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerStarted","Data":"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b"} Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.758752 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b685f" podStartSLOduration=2.7261736020000003 podStartE2EDuration="58.758726597s" podCreationTimestamp="2026-01-23 18:03:34 +0000 UTC" firstStartedPulling="2026-01-23 18:03:36.171137262 +0000 UTC m=+159.173595195" lastFinishedPulling="2026-01-23 18:04:32.203690257 +0000 UTC m=+215.206148190" observedRunningTime="2026-01-23 18:04:32.755201112 +0000 UTC m=+215.757659045" watchObservedRunningTime="2026-01-23 18:04:32.758726597 +0000 UTC m=+215.761184530" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.887603 4760 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:04:32 crc kubenswrapper[4760]: I0123 18:04:32.887665 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:04:33 crc kubenswrapper[4760]: I0123 18:04:33.932609 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x8kgm" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="registry-server" probeResult="failure" output=< Jan 23 18:04:33 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:04:33 crc kubenswrapper[4760]: > Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.273509 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.273565 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.317563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.675146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.675192 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.713080 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:34 crc kubenswrapper[4760]: I0123 18:04:34.761918 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:04:35 crc kubenswrapper[4760]: I0123 18:04:35.248575 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:04:35 crc kubenswrapper[4760]: I0123 18:04:35.286980 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:04:35 crc kubenswrapper[4760]: I0123 18:04:35.305885 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:35 crc kubenswrapper[4760]: I0123 18:04:35.306531 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:36 crc kubenswrapper[4760]: I0123 18:04:36.345557 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9ndf" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="registry-server" probeResult="failure" output=< Jan 23 18:04:36 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:04:36 crc kubenswrapper[4760]: > Jan 23 18:04:38 crc kubenswrapper[4760]: I0123 18:04:38.748452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerStarted","Data":"81823471a6def002c28ce26056bfa2f7fcd95c5bd15ce0d7b3c5dfc2d6ad6b7b"} Jan 23 18:04:38 crc kubenswrapper[4760]: I0123 18:04:38.767074 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfxx4" podStartSLOduration=3.918610197 podStartE2EDuration="1m7.767054803s" podCreationTimestamp="2026-01-23 18:03:31 +0000 UTC" firstStartedPulling="2026-01-23 18:03:34.129522078 +0000 UTC m=+157.131980011" lastFinishedPulling="2026-01-23 18:04:37.977966674 +0000 UTC 
m=+220.980424617" observedRunningTime="2026-01-23 18:04:38.765112681 +0000 UTC m=+221.767570624" watchObservedRunningTime="2026-01-23 18:04:38.767054803 +0000 UTC m=+221.769512766" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.274017 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.545248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.827247 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.827512 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2jppg" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="registry-server" containerID="cri-o://0cd3b852b5ef244dfc1eb14ab66c124d6ea9c5361581c0562d7b24c359b1c3a5" gracePeriod=2 Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.886785 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.886833 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.925218 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.927120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:04:42 crc kubenswrapper[4760]: I0123 18:04:42.974756 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:04:43 crc kubenswrapper[4760]: I0123 18:04:43.808882 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:44 crc kubenswrapper[4760]: I0123 18:04:44.713858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.224106 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.301319 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s4mrj"] Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.363938 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.410011 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.786319 4760 generic.go:334] "Generic (PLEG): container finished" podID="c7132701-1753-45e3-abf6-09a7f589dddd" containerID="0cd3b852b5ef244dfc1eb14ab66c124d6ea9c5361581c0562d7b24c359b1c3a5" exitCode=0 Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.787324 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerDied","Data":"0cd3b852b5ef244dfc1eb14ab66c124d6ea9c5361581c0562d7b24c359b1c3a5"} Jan 23 18:04:45 crc kubenswrapper[4760]: I0123 18:04:45.787515 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xfxx4" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" 
containerName="registry-server" containerID="cri-o://81823471a6def002c28ce26056bfa2f7fcd95c5bd15ce0d7b3c5dfc2d6ad6b7b" gracePeriod=2 Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.075710 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.075795 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.075863 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.076728 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.076913 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880" gracePeriod=600 Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.569464 4760 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.694014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldvwq\" (UniqueName: \"kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq\") pod \"c7132701-1753-45e3-abf6-09a7f589dddd\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.694087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities\") pod \"c7132701-1753-45e3-abf6-09a7f589dddd\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.694120 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content\") pod \"c7132701-1753-45e3-abf6-09a7f589dddd\" (UID: \"c7132701-1753-45e3-abf6-09a7f589dddd\") " Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.695870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities" (OuterVolumeSpecName: "utilities") pod "c7132701-1753-45e3-abf6-09a7f589dddd" (UID: "c7132701-1753-45e3-abf6-09a7f589dddd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.703453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq" (OuterVolumeSpecName: "kube-api-access-ldvwq") pod "c7132701-1753-45e3-abf6-09a7f589dddd" (UID: "c7132701-1753-45e3-abf6-09a7f589dddd"). 
InnerVolumeSpecName "kube-api-access-ldvwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.746639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7132701-1753-45e3-abf6-09a7f589dddd" (UID: "c7132701-1753-45e3-abf6-09a7f589dddd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.795294 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.795340 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7132701-1753-45e3-abf6-09a7f589dddd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.795363 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldvwq\" (UniqueName: \"kubernetes.io/projected/c7132701-1753-45e3-abf6-09a7f589dddd-kube-api-access-ldvwq\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.797056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jppg" event={"ID":"c7132701-1753-45e3-abf6-09a7f589dddd","Type":"ContainerDied","Data":"c52971881c94c6b5cb45d229301124568ccfee5e8b9d4574e07c700e7f33fbf0"} Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.797106 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jppg" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.797181 4760 scope.go:117] "RemoveContainer" containerID="0cd3b852b5ef244dfc1eb14ab66c124d6ea9c5361581c0562d7b24c359b1c3a5" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.801025 4760 generic.go:334] "Generic (PLEG): container finished" podID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerID="81823471a6def002c28ce26056bfa2f7fcd95c5bd15ce0d7b3c5dfc2d6ad6b7b" exitCode=0 Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.801093 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerDied","Data":"81823471a6def002c28ce26056bfa2f7fcd95c5bd15ce0d7b3c5dfc2d6ad6b7b"} Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.803316 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880" exitCode=0 Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.803360 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880"} Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.814708 4760 scope.go:117] "RemoveContainer" containerID="7959a74dc3fb0e4f28a3b09803083c97a9bca080e2d124257f37a725294c2dfc" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.841423 4760 scope.go:117] "RemoveContainer" containerID="51a7755a7b38e5e9bd37bc8a6cf0898623ac2ae3a7c187c0827d24a076a4abeb" Jan 23 18:04:46 crc kubenswrapper[4760]: I0123 18:04:46.845203 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:04:46 crc kubenswrapper[4760]: 
I0123 18:04:46.848441 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2jppg"] Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.056922 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.198739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22bg\" (UniqueName: \"kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg\") pod \"3500fe91-533f-4f4d-85c0-071bf05d5916\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.198917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities\") pod \"3500fe91-533f-4f4d-85c0-071bf05d5916\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.198994 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content\") pod \"3500fe91-533f-4f4d-85c0-071bf05d5916\" (UID: \"3500fe91-533f-4f4d-85c0-071bf05d5916\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.200258 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities" (OuterVolumeSpecName: "utilities") pod "3500fe91-533f-4f4d-85c0-071bf05d5916" (UID: "3500fe91-533f-4f4d-85c0-071bf05d5916"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.202752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg" (OuterVolumeSpecName: "kube-api-access-f22bg") pod "3500fe91-533f-4f4d-85c0-071bf05d5916" (UID: "3500fe91-533f-4f4d-85c0-071bf05d5916"). InnerVolumeSpecName "kube-api-access-f22bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.263707 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3500fe91-533f-4f4d-85c0-071bf05d5916" (UID: "3500fe91-533f-4f4d-85c0-071bf05d5916"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.300852 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.300896 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3500fe91-533f-4f4d-85c0-071bf05d5916-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.300910 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f22bg\" (UniqueName: \"kubernetes.io/projected/3500fe91-533f-4f4d-85c0-071bf05d5916-kube-api-access-f22bg\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.421139 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 
18:04:47.421352 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b685f" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="registry-server" containerID="cri-o://709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b" gracePeriod=2 Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.609915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" path="/var/lib/kubelet/pods/c7132701-1753-45e3-abf6-09a7f589dddd/volumes" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.785276 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.809886 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200"} Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.811790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfxx4" event={"ID":"3500fe91-533f-4f4d-85c0-071bf05d5916","Type":"ContainerDied","Data":"2b1d52811e2f5a8faef349f1a255c60684322b2e4fe9bf4b5697b0a854d0571f"} Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.811823 4760 scope.go:117] "RemoveContainer" containerID="81823471a6def002c28ce26056bfa2f7fcd95c5bd15ce0d7b3c5dfc2d6ad6b7b" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.811894 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfxx4" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.822111 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerID="709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b" exitCode=0 Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.822166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerDied","Data":"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b"} Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.822176 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b685f" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.822195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b685f" event={"ID":"d0ba54fc-0961-4e37-a830-fd9b7582b49b","Type":"ContainerDied","Data":"2f6122fcbef1b9edbaada39cf1dd7984dc397fa00d4891e53b373fc5cb03bbe6"} Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.830647 4760 scope.go:117] "RemoveContainer" containerID="2f4733136e18869c7a4ee1da0c7494e210b53580e37df3cd5730be4dd8c95739" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.851373 4760 scope.go:117] "RemoveContainer" containerID="786410984e0211995f2c64183fb4fdb1e59f3a1958eefb739bbd6bf0a590a8d0" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.855274 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.859883 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xfxx4"] Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.874950 4760 scope.go:117] "RemoveContainer" 
containerID="709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.902569 4760 scope.go:117] "RemoveContainer" containerID="ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.906844 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities\") pod \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.907195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mtfp\" (UniqueName: \"kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp\") pod \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.907276 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content\") pod \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\" (UID: \"d0ba54fc-0961-4e37-a830-fd9b7582b49b\") " Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.907791 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities" (OuterVolumeSpecName: "utilities") pod "d0ba54fc-0961-4e37-a830-fd9b7582b49b" (UID: "d0ba54fc-0961-4e37-a830-fd9b7582b49b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.917518 4760 scope.go:117] "RemoveContainer" containerID="b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.919150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp" (OuterVolumeSpecName: "kube-api-access-9mtfp") pod "d0ba54fc-0961-4e37-a830-fd9b7582b49b" (UID: "d0ba54fc-0961-4e37-a830-fd9b7582b49b"). InnerVolumeSpecName "kube-api-access-9mtfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.935181 4760 scope.go:117] "RemoveContainer" containerID="709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b" Jan 23 18:04:47 crc kubenswrapper[4760]: E0123 18:04:47.935853 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b\": container with ID starting with 709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b not found: ID does not exist" containerID="709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.935972 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b"} err="failed to get container status \"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b\": rpc error: code = NotFound desc = could not find container \"709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b\": container with ID starting with 709759384f8ee173987eb1f6c85a8ce09c848ea6f0b37f2bf9b545100092282b not found: ID does not exist" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.936079 
4760 scope.go:117] "RemoveContainer" containerID="ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8" Jan 23 18:04:47 crc kubenswrapper[4760]: E0123 18:04:47.936554 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8\": container with ID starting with ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8 not found: ID does not exist" containerID="ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.936651 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8"} err="failed to get container status \"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8\": rpc error: code = NotFound desc = could not find container \"ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8\": container with ID starting with ab6081a6f263ac3e0489570dce1246871a3dbdd80246f7b8cc2926593816a8c8 not found: ID does not exist" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.936749 4760 scope.go:117] "RemoveContainer" containerID="b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac" Jan 23 18:04:47 crc kubenswrapper[4760]: E0123 18:04:47.938113 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac\": container with ID starting with b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac not found: ID does not exist" containerID="b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.938232 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac"} err="failed to get container status \"b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac\": rpc error: code = NotFound desc = could not find container \"b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac\": container with ID starting with b337706c860349102ad419e7e86e51aba80adc279aaee2c1393b6386b74b18ac not found: ID does not exist" Jan 23 18:04:47 crc kubenswrapper[4760]: I0123 18:04:47.942679 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0ba54fc-0961-4e37-a830-fd9b7582b49b" (UID: "d0ba54fc-0961-4e37-a830-fd9b7582b49b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:48 crc kubenswrapper[4760]: I0123 18:04:48.008824 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:48 crc kubenswrapper[4760]: I0123 18:04:48.008960 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0ba54fc-0961-4e37-a830-fd9b7582b49b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:48 crc kubenswrapper[4760]: I0123 18:04:48.008987 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mtfp\" (UniqueName: \"kubernetes.io/projected/d0ba54fc-0961-4e37-a830-fd9b7582b49b-kube-api-access-9mtfp\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:48 crc kubenswrapper[4760]: I0123 18:04:48.150565 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:04:48 crc kubenswrapper[4760]: I0123 18:04:48.154653 4760 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-b685f"] Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.602901 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" path="/var/lib/kubelet/pods/3500fe91-533f-4f4d-85c0-071bf05d5916/volumes" Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.604102 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" path="/var/lib/kubelet/pods/d0ba54fc-0961-4e37-a830-fd9b7582b49b/volumes" Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.621998 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.622269 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9ndf" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="registry-server" containerID="cri-o://e6ff88a2cb7e11dc11773c10ad0d8499ce68d969de93dbd89c187e64d0113637" gracePeriod=2 Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.849588 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf43a009-6480-45b5-ad78-64e9f442686a" containerID="e6ff88a2cb7e11dc11773c10ad0d8499ce68d969de93dbd89c187e64d0113637" exitCode=0 Jan 23 18:04:49 crc kubenswrapper[4760]: I0123 18:04:49.849626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerDied","Data":"e6ff88a2cb7e11dc11773c10ad0d8499ce68d969de93dbd89c187e64d0113637"} Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.007174 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.135035 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content\") pod \"cf43a009-6480-45b5-ad78-64e9f442686a\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.135115 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5xn9\" (UniqueName: \"kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9\") pod \"cf43a009-6480-45b5-ad78-64e9f442686a\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.135142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities\") pod \"cf43a009-6480-45b5-ad78-64e9f442686a\" (UID: \"cf43a009-6480-45b5-ad78-64e9f442686a\") " Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.136060 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities" (OuterVolumeSpecName: "utilities") pod "cf43a009-6480-45b5-ad78-64e9f442686a" (UID: "cf43a009-6480-45b5-ad78-64e9f442686a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.143576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9" (OuterVolumeSpecName: "kube-api-access-h5xn9") pod "cf43a009-6480-45b5-ad78-64e9f442686a" (UID: "cf43a009-6480-45b5-ad78-64e9f442686a"). InnerVolumeSpecName "kube-api-access-h5xn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.236006 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5xn9\" (UniqueName: \"kubernetes.io/projected/cf43a009-6480-45b5-ad78-64e9f442686a-kube-api-access-h5xn9\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.236041 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.266778 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf43a009-6480-45b5-ad78-64e9f442686a" (UID: "cf43a009-6480-45b5-ad78-64e9f442686a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.337426 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf43a009-6480-45b5-ad78-64e9f442686a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.856841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9ndf" event={"ID":"cf43a009-6480-45b5-ad78-64e9f442686a","Type":"ContainerDied","Data":"cf18c8e35bfeaf4ec11cabfb85d064601dae8f6a258a3054eb0332916ab1150c"} Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.856896 4760 scope.go:117] "RemoveContainer" containerID="e6ff88a2cb7e11dc11773c10ad0d8499ce68d969de93dbd89c187e64d0113637" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.856906 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9ndf" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.870386 4760 scope.go:117] "RemoveContainer" containerID="48cbccd81fffd36c754c2bc5780e849c2cf54462ce443e46b8aafb29f1a31873" Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.882342 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.888617 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9ndf"] Jan 23 18:04:50 crc kubenswrapper[4760]: I0123 18:04:50.904540 4760 scope.go:117] "RemoveContainer" containerID="fd750587932728c710a393a67a643e365e182c7f39aba766778ce31071ad081d" Jan 23 18:04:51 crc kubenswrapper[4760]: I0123 18:04:51.600659 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" path="/var/lib/kubelet/pods/cf43a009-6480-45b5-ad78-64e9f442686a/volumes" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.689209 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.690121 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8" gracePeriod=15 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.690179 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff" gracePeriod=15 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.690290 
4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0" gracePeriod=15 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.690304 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c" gracePeriod=15 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.690319 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0" gracePeriod=15 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.691718 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692031 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692050 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692062 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692074 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692088 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692099 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692111 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692124 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692137 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692149 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692166 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692178 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692195 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692206 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692227 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692239 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692250 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692263 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692276 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692288 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692302 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692312 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692330 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692341 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692354 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692365 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692383 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692394 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692433 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692444 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="extract-utilities" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692464 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692475 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: 
I0123 18:04:59.692503 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="extract-content" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692530 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 18:04:59 crc kubenswrapper[4760]: E0123 18:04:59.692550 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692561 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692711 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692728 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7132701-1753-45e3-abf6-09a7f589dddd" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692743 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692769 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 
18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692787 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692800 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf43a009-6480-45b5-ad78-64e9f442686a" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692814 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ba54fc-0961-4e37-a830-fd9b7582b49b" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.692827 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3500fe91-533f-4f4d-85c0-071bf05d5916" containerName="registry-server" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.693134 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.696361 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.697099 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.702293 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858312 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.858990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.859028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.905167 4760 generic.go:334] "Generic (PLEG): container finished" podID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" containerID="00bdb585ad3aee6ba3329813e3060a9c6aea40d04ddd5006aa9878e72793d0b8" exitCode=0 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.905232 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"690bc7ab-1ccc-49ce-a86f-18a94fa042c7","Type":"ContainerDied","Data":"00bdb585ad3aee6ba3329813e3060a9c6aea40d04ddd5006aa9878e72793d0b8"} Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.905895 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.907537 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.908850 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.910076 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff" exitCode=0 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.910100 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0" exitCode=0 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.910107 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c" exitCode=0 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.910113 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0" exitCode=2 Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.910192 4760 scope.go:117] "RemoveContainer" containerID="3ea69ef3c74526b4e7b1e85e8f74a6736833cfd02bf8a27bb55bd77f3c633337" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959565 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959677 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959762 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:04:59 crc kubenswrapper[4760]: I0123 18:04:59.959776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:00 crc kubenswrapper[4760]: I0123 18:05:00.917691 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.199038 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.200024 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.375121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock\") pod \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.375236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access\") pod \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " Jan 23 
18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.375275 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir\") pod \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\" (UID: \"690bc7ab-1ccc-49ce-a86f-18a94fa042c7\") " Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.375497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "690bc7ab-1ccc-49ce-a86f-18a94fa042c7" (UID: "690bc7ab-1ccc-49ce-a86f-18a94fa042c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.375864 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.376157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "690bc7ab-1ccc-49ce-a86f-18a94fa042c7" (UID: "690bc7ab-1ccc-49ce-a86f-18a94fa042c7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.380345 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "690bc7ab-1ccc-49ce-a86f-18a94fa042c7" (UID: "690bc7ab-1ccc-49ce-a86f-18a94fa042c7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.476717 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.476760 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/690bc7ab-1ccc-49ce-a86f-18a94fa042c7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:01 crc kubenswrapper[4760]: E0123 18:05:01.678295 4760 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.90:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" volumeName="registry-storage" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.924705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"690bc7ab-1ccc-49ce-a86f-18a94fa042c7","Type":"ContainerDied","Data":"8cb5d798c909bce5ab9dab192e79f4b249327788a862d00f3f90dc10a8a93cf0"} Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.924750 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb5d798c909bce5ab9dab192e79f4b249327788a862d00f3f90dc10a8a93cf0" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.924814 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:05:01 crc kubenswrapper[4760]: I0123 18:05:01.929099 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.560636 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.561386 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.562323 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.562818 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694390 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 
18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694506 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694547 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694715 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.694768 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.795656 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.796376 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.796389 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.936636 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.939493 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8" exitCode=0 Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.939576 4760 scope.go:117] "RemoveContainer" containerID="bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.939690 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.955842 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.956400 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.967683 4760 scope.go:117] "RemoveContainer" containerID="040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0" Jan 23 18:05:02 crc kubenswrapper[4760]: I0123 18:05:02.988402 4760 scope.go:117] "RemoveContainer" containerID="9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.001608 4760 scope.go:117] "RemoveContainer" containerID="2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.012196 4760 scope.go:117] "RemoveContainer" containerID="ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.028234 4760 scope.go:117] "RemoveContainer" containerID="a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.049598 4760 scope.go:117] "RemoveContainer" containerID="bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.050180 4760 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\": container with ID starting with bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff not found: ID does not exist" containerID="bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.050219 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff"} err="failed to get container status \"bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\": rpc error: code = NotFound desc = could not find container \"bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff\": container with ID starting with bbcbe2a8dc4c0fa0a8868cdb5418ecff185b923cb169bf87fac61b189033d8ff not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.050245 4760 scope.go:117] "RemoveContainer" containerID="040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.052677 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\": container with ID starting with 040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0 not found: ID does not exist" containerID="040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.052714 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0"} err="failed to get container status \"040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\": rpc error: code = NotFound desc = could 
not find container \"040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0\": container with ID starting with 040676e79311433681961ed2ebc67468b17806ba1ba130be070fb391e650d4a0 not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.052742 4760 scope.go:117] "RemoveContainer" containerID="9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.053119 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\": container with ID starting with 9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c not found: ID does not exist" containerID="9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.053183 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c"} err="failed to get container status \"9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\": rpc error: code = NotFound desc = could not find container \"9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c\": container with ID starting with 9e928f244dde165aa7289fa5f6e5c337d52782c257b4c5288c4a760a3533714c not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.053247 4760 scope.go:117] "RemoveContainer" containerID="2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.053798 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\": container with ID starting with 2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0 not found: 
ID does not exist" containerID="2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.053819 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0"} err="failed to get container status \"2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\": rpc error: code = NotFound desc = could not find container \"2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0\": container with ID starting with 2003e9aa3615c44608fc6a8a34f3d5b2267774c560af5957b2cce6e18878c3d0 not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.053834 4760 scope.go:117] "RemoveContainer" containerID="ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.054031 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\": container with ID starting with ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8 not found: ID does not exist" containerID="ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.054054 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8"} err="failed to get container status \"ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\": rpc error: code = NotFound desc = could not find container \"ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8\": container with ID starting with ba9c95f388bc271663cd2e78c62b5a897275b82ce846acc2399a46f3bb9c79d8 not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.054073 4760 
scope.go:117] "RemoveContainer" containerID="a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367" Jan 23 18:05:03 crc kubenswrapper[4760]: E0123 18:05:03.054242 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\": container with ID starting with a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367 not found: ID does not exist" containerID="a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.054256 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367"} err="failed to get container status \"a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\": rpc error: code = NotFound desc = could not find container \"a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367\": container with ID starting with a119918241848f998a5ce2b0e3ef26f06017809d93d5845e992837481bd86367 not found: ID does not exist" Jan 23 18:05:03 crc kubenswrapper[4760]: I0123 18:05:03.603091 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 18:05:04 crc kubenswrapper[4760]: E0123 18:05:04.727653 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.90:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:04 crc kubenswrapper[4760]: I0123 18:05:04.728817 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:04 crc kubenswrapper[4760]: E0123 18:05:04.753637 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.90:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6e4ccc6f0444 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:05:04.752870468 +0000 UTC m=+247.755328401,LastTimestamp:2026-01-23 18:05:04.752870468 +0000 UTC m=+247.755328401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 18:05:04 crc kubenswrapper[4760]: I0123 18:05:04.951129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e814cd150596c52bc0888b5ca364feb644b594ec1d392e4812e775798674b034"} Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.491822 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.491999 4760 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.492149 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.492293 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.492490 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: I0123 18:05:05.492518 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.492677 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="200ms" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.693252 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="400ms" Jan 23 18:05:05 crc 
kubenswrapper[4760]: I0123 18:05:05.958172 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"771e251b5db3df03345a062d76fc8e25a7cad9fb206e70584e350538cf8f50ea"} Jan 23 18:05:05 crc kubenswrapper[4760]: I0123 18:05:05.958784 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:05 crc kubenswrapper[4760]: E0123 18:05:05.958868 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.90:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:06 crc kubenswrapper[4760]: E0123 18:05:06.094303 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="800ms" Jan 23 18:05:06 crc kubenswrapper[4760]: E0123 18:05:06.895706 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="1.6s" Jan 23 18:05:06 crc kubenswrapper[4760]: E0123 18:05:06.964891 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.90:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:07 crc kubenswrapper[4760]: I0123 18:05:07.599865 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:08 crc kubenswrapper[4760]: E0123 18:05:08.497071 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="3.2s" Jan 23 18:05:09 crc kubenswrapper[4760]: E0123 18:05:09.293819 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.90:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6e4ccc6f0444 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:05:04.752870468 +0000 UTC m=+247.755328401,LastTimestamp:2026-01-23 18:05:04.752870468 +0000 UTC m=+247.755328401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 
18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.334088 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" containerID="cri-o://67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5" gracePeriod=15 Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.695957 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.696931 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.697663 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.893875 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" 
(UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894326 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894356 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbxbc\" (UniqueName: \"kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894381 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.894497 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.896687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.897062 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.897393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.897958 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.898337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login\") pod \"37543438-dc29-44d6-a46e-8864aa3fcad4\" (UID: \"37543438-dc29-44d6-a46e-8864aa3fcad4\") " Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.900320 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904299 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904374 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904475 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904513 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37543438-dc29-44d6-a46e-8864aa3fcad4-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904544 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904301 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc" (OuterVolumeSpecName: "kube-api-access-qbxbc") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "kube-api-access-qbxbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.904762 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.905635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.906571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.908739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.909154 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.909516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.913231 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.913840 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "37543438-dc29-44d6-a46e-8864aa3fcad4" (UID: "37543438-dc29-44d6-a46e-8864aa3fcad4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.992123 4760 generic.go:334] "Generic (PLEG): container finished" podID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerID="67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5" exitCode=0 Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.992326 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.992371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" event={"ID":"37543438-dc29-44d6-a46e-8864aa3fcad4","Type":"ContainerDied","Data":"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5"} Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.992963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" event={"ID":"37543438-dc29-44d6-a46e-8864aa3fcad4","Type":"ContainerDied","Data":"a2075edd79e9404dae38dd8ce8720afcd639b1591b56d9ebbe94bea8b21100c6"} Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.992987 4760 scope.go:117] "RemoveContainer" containerID="67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.993562 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:10 crc kubenswrapper[4760]: I0123 18:05:10.993858 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005462 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005485 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005495 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005504 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005514 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005524 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 
18:05:11.005534 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbxbc\" (UniqueName: \"kubernetes.io/projected/37543438-dc29-44d6-a46e-8864aa3fcad4-kube-api-access-qbxbc\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005543 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.005554 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37543438-dc29-44d6-a46e-8864aa3fcad4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.011142 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.011452 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.013940 4760 scope.go:117] "RemoveContainer" containerID="67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5" Jan 23 18:05:11 crc kubenswrapper[4760]: E0123 18:05:11.014504 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5\": container with ID starting with 67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5 not found: ID does not exist" containerID="67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5" Jan 23 18:05:11 crc kubenswrapper[4760]: I0123 18:05:11.014564 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5"} err="failed to get container status \"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5\": rpc error: code = NotFound desc = could not find container \"67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5\": container with ID starting with 67fdf9fff9cc2c963e2ce77489f3382a0e22d976f59c02807c1c2e6d6302ded5 not found: ID does not exist" Jan 23 18:05:11 crc kubenswrapper[4760]: E0123 18:05:11.698143 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.90:6443: connect: connection refused" interval="6.4s" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.595133 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.596602 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.597195 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.622210 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.622253 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:12 crc kubenswrapper[4760]: E0123 18:05:12.622802 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:12 crc kubenswrapper[4760]: I0123 18:05:12.623571 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.016535 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.016837 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4" exitCode=1 Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.016922 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4"} Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.017792 4760 scope.go:117] "RemoveContainer" containerID="15457cbac62f5e9e84dfda9ed1537a5ca5a057bd05d7d0702f05d65ef17c82a4" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.017938 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.018510 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.018855 4760 
status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.019256 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fc3f53e79b1c9e0f9be208a2804a6f45e88287b0866987ded68390668e7a655a" exitCode=0 Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.019287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fc3f53e79b1c9e0f9be208a2804a6f45e88287b0866987ded68390668e7a655a"} Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.019311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e21de66321983ccb93998c08b5c5a2f01f4d04c0a6f7585a746d90961a0ddc3f"} Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.019554 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.019569 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:13 crc kubenswrapper[4760]: E0123 18:05:13.019910 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:13 crc kubenswrapper[4760]: 
I0123 18:05:13.020306 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.020892 4760 status_manager.go:851] "Failed to get status for pod" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" pod="openshift-authentication/oauth-openshift-558db77b4-s4mrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-s4mrj\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:13 crc kubenswrapper[4760]: I0123 18:05:13.021595 4760 status_manager.go:851] "Failed to get status for pod" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.90:6443: connect: connection refused" Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.021765 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.030677 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.030990 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ced57ae32afe757c65c0190a090fcb4c22e07f8120697869a91761d9155fd3fe"} Jan 23 18:05:14 crc 
kubenswrapper[4760]: I0123 18:05:14.034236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"65662c8ad53697276e819cfd9d05e3bd57837146d80210aab758b6b2da93c61b"} Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9597ea669234e7d1441f3ea3a0966405729b568966346009043d9636c4f072de"} Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68f300503c5516da6dc87195ceca9a3698d3724da1c23eab517d7e45d7ae55c0"} Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"846ac3497d72f92da8011d591ba3165b69e334354d5efc159d32e78162916ebe"} Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"056dad801e7c2622295cf3394680e29d3deb68bfa6369fe47e52b81039595935"} Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034585 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:14 crc kubenswrapper[4760]: I0123 18:05:14.034607 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:17 crc 
kubenswrapper[4760]: I0123 18:05:17.623958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:17 crc kubenswrapper[4760]: I0123 18:05:17.624307 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:17 crc kubenswrapper[4760]: I0123 18:05:17.638296 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:19 crc kubenswrapper[4760]: I0123 18:05:19.572573 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:19 crc kubenswrapper[4760]: I0123 18:05:19.693762 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d023737-9e22-4dad-aee9-e750a2cd06c5" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.067343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.067641 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.067721 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.070881 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d023737-9e22-4dad-aee9-e750a2cd06c5" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.073189 4760 status_manager.go:308] 
"Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://056dad801e7c2622295cf3394680e29d3deb68bfa6369fe47e52b81039595935" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.073219 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:20 crc kubenswrapper[4760]: I0123 18:05:20.361348 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:05:21 crc kubenswrapper[4760]: I0123 18:05:21.074033 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:21 crc kubenswrapper[4760]: I0123 18:05:21.074326 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:21 crc kubenswrapper[4760]: I0123 18:05:21.077434 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d023737-9e22-4dad-aee9-e750a2cd06c5" Jan 23 18:05:22 crc kubenswrapper[4760]: I0123 18:05:22.080054 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:22 crc kubenswrapper[4760]: I0123 18:05:22.080106 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:22 crc kubenswrapper[4760]: I0123 18:05:22.083941 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d023737-9e22-4dad-aee9-e750a2cd06c5" Jan 23 
18:05:24 crc kubenswrapper[4760]: I0123 18:05:24.022260 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:05:24 crc kubenswrapper[4760]: I0123 18:05:24.031194 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:05:24 crc kubenswrapper[4760]: I0123 18:05:24.098990 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:05:30 crc kubenswrapper[4760]: I0123 18:05:30.144591 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 18:05:30 crc kubenswrapper[4760]: I0123 18:05:30.244062 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:05:30 crc kubenswrapper[4760]: I0123 18:05:30.248717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 18:05:31 crc kubenswrapper[4760]: I0123 18:05:31.634659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 18:05:31 crc kubenswrapper[4760]: I0123 18:05:31.730402 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.053401 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.090584 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 18:05:32 crc 
kubenswrapper[4760]: I0123 18:05:32.134004 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.319809 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.409285 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.488985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.516588 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.673448 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.716282 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.731991 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.877760 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.929881 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 18:05:32 crc kubenswrapper[4760]: I0123 18:05:32.931995 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.063603 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.243377 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.362273 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.573623 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.620437 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.626539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.636508 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.793181 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.802755 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.894500 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.959576 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 18:05:33 crc kubenswrapper[4760]: I0123 18:05:33.989003 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.008033 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.017811 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.138205 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.148966 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.160685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.174562 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.263613 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.282829 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.384614 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 
18:05:34.510154 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.552490 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.567472 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.574050 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.681838 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.692250 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.713385 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.845363 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.886435 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.921216 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.932213 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 18:05:34 crc kubenswrapper[4760]: I0123 18:05:34.970866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.012063 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.091588 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.185705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.291371 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.380122 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.429037 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.455804 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.455891 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.469386 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 
18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.485554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.510004 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.531856 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.549284 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.567461 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.620532 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.691232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.740112 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.858648 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.921327 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:05:35 crc kubenswrapper[4760]: I0123 18:05:35.979255 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.223327 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.402931 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.452712 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.537049 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.586662 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.687870 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.776503 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.825379 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.831573 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 18:05:36 crc kubenswrapper[4760]: I0123 18:05:36.898540 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.006107 4760 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.019380 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.390991 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.537279 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.558764 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.562902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.712759 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.797915 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.813004 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.857827 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.873977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.939154 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:05:37 crc kubenswrapper[4760]: I0123 18:05:37.960215 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.019758 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.061288 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.089508 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.377370 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.384266 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.457025 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.699854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.762807 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.784333 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 18:05:38 crc kubenswrapper[4760]: I0123 18:05:38.955016 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.011225 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.044147 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.056831 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.089991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.093220 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.217705 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.222182 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.244784 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.247668 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.431989 4760 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.564639 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.633884 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.637932 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.719508 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.723197 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.797323 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.828164 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.873225 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.882166 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 18:05:39 crc kubenswrapper[4760]: I0123 18:05:39.930991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 18:05:40 crc kubenswrapper[4760]: 
I0123 18:05:40.041051 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.047475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.068036 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.068539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.078581 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.085307 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.098858 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.113257 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.180783 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.196698 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.197689 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 
23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.263106 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.269557 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.326502 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.387022 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.448285 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452256 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s4mrj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452324 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-8vhp6","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:05:40 crc kubenswrapper[4760]: E0123 18:05:40.452498 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" containerName="installer" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452515 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" containerName="installer" Jan 23 18:05:40 crc kubenswrapper[4760]: E0123 18:05:40.452527 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 
18:05:40.452533 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452629 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" containerName="oauth-openshift" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452641 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="690bc7ab-1ccc-49ce-a86f-18a94fa042c7" containerName="installer" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452688 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.452713 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90406fa4-aa7b-4dcd-a275-255c5b4a38b1" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.453002 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.455723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.455822 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.456466 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.456494 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.456502 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.456520 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.456853 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.457177 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.457200 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.457262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 
18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.457208 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.457462 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.460167 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.465622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.467221 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.471896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.485111 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.487859 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.487839993 podStartE2EDuration="21.487839993s" podCreationTimestamp="2026-01-23 18:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:05:40.477238848 +0000 UTC m=+283.479696791" watchObservedRunningTime="2026-01-23 18:05:40.487839993 +0000 UTC m=+283.490297936" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.553925 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.558837 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.559391 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.572256 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.574550 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576185 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576303 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klrb\" (UniqueName: \"kubernetes.io/projected/3bd06ef3-a390-4434-a03b-c650d8f46fb3-kube-api-access-7klrb\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: 
\"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576811 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-dir\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576831 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576879 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576894 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-policies\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.576912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.608114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.632832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.641432 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.678260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.679041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.679252 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.679500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7klrb\" (UniqueName: \"kubernetes.io/projected/3bd06ef3-a390-4434-a03b-c650d8f46fb3-kube-api-access-7klrb\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.679696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " 
pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.679871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-dir\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680439 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-policies\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680586 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " 
pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.680253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.681097 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-dir\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.681762 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-audit-policies\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.683778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.684543 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.685345 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.685422 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.687453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " 
pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.690304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.691470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.691571 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.694110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3bd06ef3-a390-4434-a03b-c650d8f46fb3-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.699521 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" 
Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.720730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klrb\" (UniqueName: \"kubernetes.io/projected/3bd06ef3-a390-4434-a03b-c650d8f46fb3-kube-api-access-7klrb\") pod \"oauth-openshift-56c7c74f4-8vhp6\" (UID: \"3bd06ef3-a390-4434-a03b-c650d8f46fb3\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.773232 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.964786 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-8vhp6"] Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.967962 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.976292 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.995601 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:05:40 crc kubenswrapper[4760]: I0123 18:05:40.996093 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://771e251b5db3df03345a062d76fc8e25a7cad9fb206e70584e350538cf8f50ea" gracePeriod=5 Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.137942 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 
18:05:41.207453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" event={"ID":"3bd06ef3-a390-4434-a03b-c650d8f46fb3","Type":"ContainerStarted","Data":"75c7800d36703f393960159d1770ac660efa488c2e2b4653a189fbcdbec9f6d9"} Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.207491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" event={"ID":"3bd06ef3-a390-4434-a03b-c650d8f46fb3","Type":"ContainerStarted","Data":"9aa4e8e1fb540eb454e6ab82f7195304b3a7ab5dfa6c0c5e3d5953429c15547b"} Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.207956 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.208870 4760 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-8vhp6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.208918 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" podUID="3bd06ef3-a390-4434-a03b-c650d8f46fb3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.433627 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.439149 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 
18:05:41.477778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.479712 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.505752 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.609602 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37543438-dc29-44d6-a46e-8864aa3fcad4" path="/var/lib/kubelet/pods/37543438-dc29-44d6-a46e-8864aa3fcad4/volumes" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.709148 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.731888 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.747749 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.790540 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.805199 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.830570 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.877785 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.894280 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 18:05:41 crc kubenswrapper[4760]: I0123 18:05:41.951083 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.025381 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.041710 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.068649 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.105969 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.151010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.152391 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.214580 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.240638 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c7c74f4-8vhp6" podStartSLOduration=57.240619802 
podStartE2EDuration="57.240619802s" podCreationTimestamp="2026-01-23 18:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:05:41.22824161 +0000 UTC m=+284.230699543" watchObservedRunningTime="2026-01-23 18:05:42.240619802 +0000 UTC m=+285.243077745" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.387749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.395016 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.450722 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.516209 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.517794 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.553104 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.563282 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.587115 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.591787 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.710372 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.867353 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.907482 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.971893 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 18:05:42 crc kubenswrapper[4760]: I0123 18:05:42.981945 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.027854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.055849 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.062867 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.129900 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.189114 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" 
Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.194890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.248527 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.335965 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.401374 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.535052 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.562983 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.718251 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 18:05:43 crc kubenswrapper[4760]: I0123 18:05:43.918326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.004053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.255496 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.283813 4760 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.308611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.326425 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.330771 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.334249 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.367051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.380235 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.394536 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.547364 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.581036 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.715600 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.789135 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.791356 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.850023 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.850459 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.861198 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 18:05:44 crc kubenswrapper[4760]: I0123 18:05:44.952055 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.176670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.376723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.470261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.505319 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.632700 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.773683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 18:05:45 crc kubenswrapper[4760]: I0123 18:05:45.824132 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.102289 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.163449 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.232720 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.233038 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="771e251b5db3df03345a062d76fc8e25a7cad9fb206e70584e350538cf8f50ea" exitCode=137 Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.267952 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.276281 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.496192 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.573944 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.574040 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765712 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765816 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765905 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.765967 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.766074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.766143 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.766159 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.766171 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.766182 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.775007 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.809337 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 18:05:46 crc kubenswrapper[4760]: I0123 18:05:46.867281 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.244544 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.244645 4760 scope.go:117] "RemoveContainer" containerID="771e251b5db3df03345a062d76fc8e25a7cad9fb206e70584e350538cf8f50ea" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.244752 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.305493 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.309949 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.607819 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 18:05:47 crc kubenswrapper[4760]: I0123 18:05:47.995076 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 18:05:48 crc kubenswrapper[4760]: I0123 18:05:48.038610 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 18:05:48 crc kubenswrapper[4760]: I0123 18:05:48.088770 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 18:05:57 crc kubenswrapper[4760]: I0123 18:05:57.342764 4760 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 18:06:02 crc kubenswrapper[4760]: I0123 18:06:02.336338 4760 generic.go:334] "Generic (PLEG): container finished" podID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerID="62984e774ea701ec3386ea7348b41ad61c2075c6f755e60ac7e1f5868efb4523" exitCode=0 Jan 23 18:06:02 crc kubenswrapper[4760]: I0123 18:06:02.336447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" 
event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerDied","Data":"62984e774ea701ec3386ea7348b41ad61c2075c6f755e60ac7e1f5868efb4523"} Jan 23 18:06:02 crc kubenswrapper[4760]: I0123 18:06:02.337513 4760 scope.go:117] "RemoveContainer" containerID="62984e774ea701ec3386ea7348b41ad61c2075c6f755e60ac7e1f5868efb4523" Jan 23 18:06:03 crc kubenswrapper[4760]: I0123 18:06:03.346088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerStarted","Data":"8c9538091a2de3236d2bd0f5f6c79121c26789ad0030e58da3254a710b9ee670"} Jan 23 18:06:03 crc kubenswrapper[4760]: I0123 18:06:03.346956 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:06:03 crc kubenswrapper[4760]: I0123 18:06:03.349013 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.402508 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"] Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.403204 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" podUID="e56ff0b4-3551-4388-af57-ed219cde17de" containerName="controller-manager" containerID="cri-o://e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60" gracePeriod=30 Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.495766 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"] Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.496110 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" podUID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" containerName="route-controller-manager" containerID="cri-o://f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952" gracePeriod=30 Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.775157 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.820796 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hdwt\" (UniqueName: \"kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt\") pod \"e56ff0b4-3551-4388-af57-ed219cde17de\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918558 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert\") pod \"e56ff0b4-3551-4388-af57-ed219cde17de\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918591 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config\") pod \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config\") pod \"e56ff0b4-3551-4388-af57-ed219cde17de\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca\") pod \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918712 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles\") pod \"e56ff0b4-3551-4388-af57-ed219cde17de\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918759 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert\") pod \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvb7\" (UniqueName: \"kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7\") pod \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\" (UID: \"04f83f2f-9119-42d7-b712-06dc0ef0adfd\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.918841 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca\") pod \"e56ff0b4-3551-4388-af57-ed219cde17de\" (UID: \"e56ff0b4-3551-4388-af57-ed219cde17de\") " Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.919958 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e56ff0b4-3551-4388-af57-ed219cde17de" (UID: "e56ff0b4-3551-4388-af57-ed219cde17de"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.920005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca" (OuterVolumeSpecName: "client-ca") pod "e56ff0b4-3551-4388-af57-ed219cde17de" (UID: "e56ff0b4-3551-4388-af57-ed219cde17de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.920073 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config" (OuterVolumeSpecName: "config") pod "04f83f2f-9119-42d7-b712-06dc0ef0adfd" (UID: "04f83f2f-9119-42d7-b712-06dc0ef0adfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.920087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca" (OuterVolumeSpecName: "client-ca") pod "04f83f2f-9119-42d7-b712-06dc0ef0adfd" (UID: "04f83f2f-9119-42d7-b712-06dc0ef0adfd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.920073 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config" (OuterVolumeSpecName: "config") pod "e56ff0b4-3551-4388-af57-ed219cde17de" (UID: "e56ff0b4-3551-4388-af57-ed219cde17de"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.924713 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04f83f2f-9119-42d7-b712-06dc0ef0adfd" (UID: "04f83f2f-9119-42d7-b712-06dc0ef0adfd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.924730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt" (OuterVolumeSpecName: "kube-api-access-7hdwt") pod "e56ff0b4-3551-4388-af57-ed219cde17de" (UID: "e56ff0b4-3551-4388-af57-ed219cde17de"). InnerVolumeSpecName "kube-api-access-7hdwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.924764 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7" (OuterVolumeSpecName: "kube-api-access-gxvb7") pod "04f83f2f-9119-42d7-b712-06dc0ef0adfd" (UID: "04f83f2f-9119-42d7-b712-06dc0ef0adfd"). InnerVolumeSpecName "kube-api-access-gxvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:06 crc kubenswrapper[4760]: I0123 18:06:06.924970 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e56ff0b4-3551-4388-af57-ed219cde17de" (UID: "e56ff0b4-3551-4388-af57-ed219cde17de"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019877 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvb7\" (UniqueName: \"kubernetes.io/projected/04f83f2f-9119-42d7-b712-06dc0ef0adfd-kube-api-access-gxvb7\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019925 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019937 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hdwt\" (UniqueName: \"kubernetes.io/projected/e56ff0b4-3551-4388-af57-ed219cde17de-kube-api-access-7hdwt\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019948 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e56ff0b4-3551-4388-af57-ed219cde17de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019963 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019977 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019987 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f83f2f-9119-42d7-b712-06dc0ef0adfd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.019998 4760 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e56ff0b4-3551-4388-af57-ed219cde17de-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.020009 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f83f2f-9119-42d7-b712-06dc0ef0adfd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.368438 4760 generic.go:334] "Generic (PLEG): container finished" podID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" containerID="f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952" exitCode=0 Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.368501 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.368566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" event={"ID":"04f83f2f-9119-42d7-b712-06dc0ef0adfd","Type":"ContainerDied","Data":"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952"} Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.368628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx" event={"ID":"04f83f2f-9119-42d7-b712-06dc0ef0adfd","Type":"ContainerDied","Data":"a01315f657219758f634d84af76766c82169c303598865db8a89d76fa9fab531"} Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.368661 4760 scope.go:117] "RemoveContainer" containerID="f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.373284 4760 generic.go:334] "Generic (PLEG): container finished" podID="e56ff0b4-3551-4388-af57-ed219cde17de" 
containerID="e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60" exitCode=0 Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.373323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" event={"ID":"e56ff0b4-3551-4388-af57-ed219cde17de","Type":"ContainerDied","Data":"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60"} Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.373348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" event={"ID":"e56ff0b4-3551-4388-af57-ed219cde17de","Type":"ContainerDied","Data":"028e37a2c6ef018d2a7b33375a6bf274d64925e4254baefc76e6ae5a177b0696"} Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.373442 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rv79c" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.393345 4760 scope.go:117] "RemoveContainer" containerID="f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952" Jan 23 18:06:07 crc kubenswrapper[4760]: E0123 18:06:07.393843 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952\": container with ID starting with f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952 not found: ID does not exist" containerID="f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.393902 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952"} err="failed to get container status \"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952\": rpc error: code = NotFound desc = could 
not find container \"f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952\": container with ID starting with f2549656ac99ec1bd93567bcd10964c57d63a5bb61f81f94e88a4f46ea011952 not found: ID does not exist" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.393937 4760 scope.go:117] "RemoveContainer" containerID="e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.417628 4760 scope.go:117] "RemoveContainer" containerID="e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60" Jan 23 18:06:07 crc kubenswrapper[4760]: E0123 18:06:07.418349 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60\": container with ID starting with e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60 not found: ID does not exist" containerID="e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.418452 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.418449 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60"} err="failed to get container status \"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60\": rpc error: code = NotFound desc = could not find container \"e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60\": container with ID starting with e76b353bd4a6b9875bdd73c7c67357e3ec3676f3a813d066da9337346ba8fd60 not found: ID does not exist" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.425497 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-rv79c"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.440810 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.444563 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxppx"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594055 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:07 crc kubenswrapper[4760]: E0123 18:06:07.594307 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56ff0b4-3551-4388-af57-ed219cde17de" containerName="controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594322 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56ff0b4-3551-4388-af57-ed219cde17de" containerName="controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: E0123 18:06:07.594334 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" containerName="route-controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594342 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" containerName="route-controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: E0123 18:06:07.594354 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594365 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594495 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e56ff0b4-3551-4388-af57-ed219cde17de" containerName="controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594511 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" containerName="route-controller-manager" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.594529 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.595072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.616065 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.618685 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.618989 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f83f2f-9119-42d7-b712-06dc0ef0adfd" path="/var/lib/kubelet/pods/04f83f2f-9119-42d7-b712-06dc0ef0adfd/volumes" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.619978 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.620190 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.620296 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.620597 4760 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="e56ff0b4-3551-4388-af57-ed219cde17de" path="/var/lib/kubelet/pods/e56ff0b4-3551-4388-af57-ed219cde17de/volumes" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.621076 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.622298 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.625525 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.625819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.626844 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.628915 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.629828 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.629985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.630138 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.630235 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.630300 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.630351 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631723 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631746 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrkd\" (UniqueName: \"kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkjs\" (UniqueName: \"kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " 
pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.631967 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.732945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.733377 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.733569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.733698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.733810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.733926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.734042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkd\" (UniqueName: \"kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.734288 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 
18:06:07.734492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmkjs\" (UniqueName: \"kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.734736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.734732 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.735662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.735727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.736866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.739030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.740161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.751016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrkd\" (UniqueName: \"kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd\") pod \"route-controller-manager-6549bf6889-gp4w7\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.759515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmkjs\" (UniqueName: 
\"kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs\") pod \"controller-manager-5d895f4879-fvsp2\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.931183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:07 crc kubenswrapper[4760]: I0123 18:06:07.946601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.138132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.191153 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:08 crc kubenswrapper[4760]: W0123 18:06:08.202755 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a212804_8b2a_4d65_a631_bab28953686f.slice/crio-ffafd5073737bce2a62c1e26c2b40c098c337186b92defc2eaf15959a2644c2e WatchSource:0}: Error finding container ffafd5073737bce2a62c1e26c2b40c098c337186b92defc2eaf15959a2644c2e: Status 404 returned error can't find the container with id ffafd5073737bce2a62c1e26c2b40c098c337186b92defc2eaf15959a2644c2e Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.381528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" event={"ID":"9a212804-8b2a-4d65-a631-bab28953686f","Type":"ContainerStarted","Data":"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f"} Jan 23 18:06:08 crc 
kubenswrapper[4760]: I0123 18:06:08.381591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" event={"ID":"9a212804-8b2a-4d65-a631-bab28953686f","Type":"ContainerStarted","Data":"ffafd5073737bce2a62c1e26c2b40c098c337186b92defc2eaf15959a2644c2e"} Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.382570 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.384876 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" event={"ID":"6ee245ee-3e05-49fe-82c6-9722128f9fad","Type":"ContainerStarted","Data":"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b"} Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.385061 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.385169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" event={"ID":"6ee245ee-3e05-49fe-82c6-9722128f9fad","Type":"ContainerStarted","Data":"08c868adfe0ebeb05f9de31ae29e8bc1978bbe0a1df070afbd93a157ce537385"} Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.389075 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.416090 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" podStartSLOduration=2.416065706 podStartE2EDuration="2.416065706s" podCreationTimestamp="2026-01-23 18:06:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:08.412695869 +0000 UTC m=+311.415153852" watchObservedRunningTime="2026-01-23 18:06:08.416065706 +0000 UTC m=+311.418523649" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.459877 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" podStartSLOduration=2.459853367 podStartE2EDuration="2.459853367s" podCreationTimestamp="2026-01-23 18:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:08.455361558 +0000 UTC m=+311.457819491" watchObservedRunningTime="2026-01-23 18:06:08.459853367 +0000 UTC m=+311.462311300" Jan 23 18:06:08 crc kubenswrapper[4760]: I0123 18:06:08.660716 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:09 crc kubenswrapper[4760]: I0123 18:06:09.814228 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:09 crc kubenswrapper[4760]: I0123 18:06:09.850357 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.402558 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" podUID="6ee245ee-3e05-49fe-82c6-9722128f9fad" containerName="controller-manager" containerID="cri-o://bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b" gracePeriod=30 Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.402731 4760 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" podUID="9a212804-8b2a-4d65-a631-bab28953686f" containerName="route-controller-manager" containerID="cri-o://e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f" gracePeriod=30 Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.885611 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.890587 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.930883 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:06:11 crc kubenswrapper[4760]: E0123 18:06:11.931061 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a212804-8b2a-4d65-a631-bab28953686f" containerName="route-controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.931072 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a212804-8b2a-4d65-a631-bab28953686f" containerName="route-controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: E0123 18:06:11.931089 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee245ee-3e05-49fe-82c6-9722128f9fad" containerName="controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.931096 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee245ee-3e05-49fe-82c6-9722128f9fad" containerName="controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.931189 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a212804-8b2a-4d65-a631-bab28953686f" containerName="route-controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 
18:06:11.931203 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee245ee-3e05-49fe-82c6-9722128f9fad" containerName="controller-manager" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.931530 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:11 crc kubenswrapper[4760]: I0123 18:06:11.975298 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config\") pod \"6ee245ee-3e05-49fe-82c6-9722128f9fad\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086136 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmkjs\" (UniqueName: \"kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs\") pod \"6ee245ee-3e05-49fe-82c6-9722128f9fad\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086175 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config\") pod \"9a212804-8b2a-4d65-a631-bab28953686f\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert\") pod \"6ee245ee-3e05-49fe-82c6-9722128f9fad\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " Jan 23 18:06:12 crc 
kubenswrapper[4760]: I0123 18:06:12.086253 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles\") pod \"6ee245ee-3e05-49fe-82c6-9722128f9fad\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086282 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca\") pod \"6ee245ee-3e05-49fe-82c6-9722128f9fad\" (UID: \"6ee245ee-3e05-49fe-82c6-9722128f9fad\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca\") pod \"9a212804-8b2a-4d65-a631-bab28953686f\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086354 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert\") pod \"9a212804-8b2a-4d65-a631-bab28953686f\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086376 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mrkd\" (UniqueName: \"kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd\") pod \"9a212804-8b2a-4d65-a631-bab28953686f\" (UID: \"9a212804-8b2a-4d65-a631-bab28953686f\") " Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086572 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086625 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxdw7\" (UniqueName: \"kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086662 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086853 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ee245ee-3e05-49fe-82c6-9722128f9fad" (UID: "6ee245ee-3e05-49fe-82c6-9722128f9fad"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086903 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config" (OuterVolumeSpecName: "config") pod "9a212804-8b2a-4d65-a631-bab28953686f" (UID: "9a212804-8b2a-4d65-a631-bab28953686f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.086959 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config" (OuterVolumeSpecName: "config") pod "6ee245ee-3e05-49fe-82c6-9722128f9fad" (UID: "6ee245ee-3e05-49fe-82c6-9722128f9fad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.087062 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a212804-8b2a-4d65-a631-bab28953686f" (UID: "9a212804-8b2a-4d65-a631-bab28953686f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.087373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ee245ee-3e05-49fe-82c6-9722128f9fad" (UID: "6ee245ee-3e05-49fe-82c6-9722128f9fad"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.091457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a212804-8b2a-4d65-a631-bab28953686f" (UID: "9a212804-8b2a-4d65-a631-bab28953686f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.092552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd" (OuterVolumeSpecName: "kube-api-access-7mrkd") pod "9a212804-8b2a-4d65-a631-bab28953686f" (UID: "9a212804-8b2a-4d65-a631-bab28953686f"). InnerVolumeSpecName "kube-api-access-7mrkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.097614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ee245ee-3e05-49fe-82c6-9722128f9fad" (UID: "6ee245ee-3e05-49fe-82c6-9722128f9fad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.107682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs" (OuterVolumeSpecName: "kube-api-access-lmkjs") pod "6ee245ee-3e05-49fe-82c6-9722128f9fad" (UID: "6ee245ee-3e05-49fe-82c6-9722128f9fad"). InnerVolumeSpecName "kube-api-access-lmkjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187306 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxdw7\" (UniqueName: \"kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187788 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187802 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187813 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187824 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a212804-8b2a-4d65-a631-bab28953686f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187836 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mrkd\" (UniqueName: \"kubernetes.io/projected/9a212804-8b2a-4d65-a631-bab28953686f-kube-api-access-7mrkd\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187848 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee245ee-3e05-49fe-82c6-9722128f9fad-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187859 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmkjs\" (UniqueName: \"kubernetes.io/projected/6ee245ee-3e05-49fe-82c6-9722128f9fad-kube-api-access-lmkjs\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.187870 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a212804-8b2a-4d65-a631-bab28953686f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc 
kubenswrapper[4760]: I0123 18:06:12.187881 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ee245ee-3e05-49fe-82c6-9722128f9fad-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.188642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.188889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.193601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.212178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxdw7\" (UniqueName: \"kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7\") pod \"route-controller-manager-66d9b996-xdnqm\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.251666 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.418397 4760 generic.go:334] "Generic (PLEG): container finished" podID="9a212804-8b2a-4d65-a631-bab28953686f" containerID="e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f" exitCode=0 Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.418481 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.418520 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" event={"ID":"9a212804-8b2a-4d65-a631-bab28953686f","Type":"ContainerDied","Data":"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f"} Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.418552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7" event={"ID":"9a212804-8b2a-4d65-a631-bab28953686f","Type":"ContainerDied","Data":"ffafd5073737bce2a62c1e26c2b40c098c337186b92defc2eaf15959a2644c2e"} Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.418568 4760 scope.go:117] "RemoveContainer" containerID="e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.429587 4760 generic.go:334] "Generic (PLEG): container finished" podID="6ee245ee-3e05-49fe-82c6-9722128f9fad" containerID="bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b" exitCode=0 Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.429634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" 
event={"ID":"6ee245ee-3e05-49fe-82c6-9722128f9fad","Type":"ContainerDied","Data":"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b"} Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.429660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" event={"ID":"6ee245ee-3e05-49fe-82c6-9722128f9fad","Type":"ContainerDied","Data":"08c868adfe0ebeb05f9de31ae29e8bc1978bbe0a1df070afbd93a157ce537385"} Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.429705 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d895f4879-fvsp2" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.448777 4760 scope.go:117] "RemoveContainer" containerID="e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f" Jan 23 18:06:12 crc kubenswrapper[4760]: E0123 18:06:12.450517 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f\": container with ID starting with e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f not found: ID does not exist" containerID="e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.450590 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f"} err="failed to get container status \"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f\": rpc error: code = NotFound desc = could not find container \"e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f\": container with ID starting with e7c709135f46bed9c9789920915a3c1a087014653c000adcad0dcd3685298f1f not found: ID does not exist" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 
18:06:12.450618 4760 scope.go:117] "RemoveContainer" containerID="bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.475143 4760 scope.go:117] "RemoveContainer" containerID="bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b" Jan 23 18:06:12 crc kubenswrapper[4760]: E0123 18:06:12.477170 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b\": container with ID starting with bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b not found: ID does not exist" containerID="bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.477214 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b"} err="failed to get container status \"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b\": rpc error: code = NotFound desc = could not find container \"bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b\": container with ID starting with bfcb5038d06b7694271f3d9e5934b9a897809727811e567027cbf0b495651e6b not found: ID does not exist" Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.500013 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.505191 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6549bf6889-gp4w7"] Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.509236 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:12 crc 
kubenswrapper[4760]: I0123 18:06:12.522469 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d895f4879-fvsp2"] Jan 23 18:06:12 crc kubenswrapper[4760]: I0123 18:06:12.760299 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.436611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" event={"ID":"571d4c2b-89b1-4622-a1e1-741208075455","Type":"ContainerStarted","Data":"58250d864f7b1c3cd0411be786857a06a4619961d1854704a776be4923342662"} Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.436675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" event={"ID":"571d4c2b-89b1-4622-a1e1-741208075455","Type":"ContainerStarted","Data":"2760521ac7de978719f27df2bc21dd6d5e0ac9c6c964df9b35ed57cfe204b4fc"} Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.436875 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.455859 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.456922 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" podStartSLOduration=3.456901876 podStartE2EDuration="3.456901876s" podCreationTimestamp="2026-01-23 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:13.453317413 
+0000 UTC m=+316.455775336" watchObservedRunningTime="2026-01-23 18:06:13.456901876 +0000 UTC m=+316.459359809" Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.601466 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee245ee-3e05-49fe-82c6-9722128f9fad" path="/var/lib/kubelet/pods/6ee245ee-3e05-49fe-82c6-9722128f9fad/volumes" Jan 23 18:06:13 crc kubenswrapper[4760]: I0123 18:06:13.601986 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a212804-8b2a-4d65-a631-bab28953686f" path="/var/lib/kubelet/pods/9a212804-8b2a-4d65-a631-bab28953686f/volumes" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.602695 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.604364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.607142 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.608072 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.608479 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.608491 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.608765 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.609032 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.609168 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.618032 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.725173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.725224 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.725257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hh84\" (UniqueName: \"kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.725304 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.725357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.826316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.826380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.826466 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hh84\" (UniqueName: \"kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.826522 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.826566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.827963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.828386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.829266 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 
crc kubenswrapper[4760]: I0123 18:06:14.832957 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.855274 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hh84\" (UniqueName: \"kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84\") pod \"controller-manager-77d68bfdb-7r4v8\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:14 crc kubenswrapper[4760]: I0123 18:06:14.926621 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:15 crc kubenswrapper[4760]: I0123 18:06:15.149599 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:15 crc kubenswrapper[4760]: I0123 18:06:15.449542 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" event={"ID":"e4ad291b-4a20-4eec-b7f6-e9142c7fc576","Type":"ContainerStarted","Data":"55b7f2f55e8e1fa645df24aadf502e0b3c67edb30852a49ff2c33a508988adba"} Jan 23 18:06:15 crc kubenswrapper[4760]: I0123 18:06:15.449817 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" event={"ID":"e4ad291b-4a20-4eec-b7f6-e9142c7fc576","Type":"ContainerStarted","Data":"b2af0e5fea49fd2a96aaefccd4e1443eba182d2512458ce375dc96d2417b488b"} Jan 23 18:06:15 crc kubenswrapper[4760]: I0123 18:06:15.466628 4760 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" podStartSLOduration=5.466606607 podStartE2EDuration="5.466606607s" podCreationTimestamp="2026-01-23 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:15.462881709 +0000 UTC m=+318.465339642" watchObservedRunningTime="2026-01-23 18:06:15.466606607 +0000 UTC m=+318.469064540" Jan 23 18:06:16 crc kubenswrapper[4760]: I0123 18:06:16.456209 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:16 crc kubenswrapper[4760]: I0123 18:06:16.464165 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.383946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.384646 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" podUID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" containerName="controller-manager" containerID="cri-o://55b7f2f55e8e1fa645df24aadf502e0b3c67edb30852a49ff2c33a508988adba" gracePeriod=30 Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.515319 4760 generic.go:334] "Generic (PLEG): container finished" podID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" containerID="55b7f2f55e8e1fa645df24aadf502e0b3c67edb30852a49ff2c33a508988adba" exitCode=0 Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.515401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" 
event={"ID":"e4ad291b-4a20-4eec-b7f6-e9142c7fc576","Type":"ContainerDied","Data":"55b7f2f55e8e1fa645df24aadf502e0b3c67edb30852a49ff2c33a508988adba"} Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.897909 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.979658 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hh84\" (UniqueName: \"kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84\") pod \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.979723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config\") pod \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.980922 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config" (OuterVolumeSpecName: "config") pod "e4ad291b-4a20-4eec-b7f6-e9142c7fc576" (UID: "e4ad291b-4a20-4eec-b7f6-e9142c7fc576"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:26 crc kubenswrapper[4760]: I0123 18:06:26.985384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84" (OuterVolumeSpecName: "kube-api-access-8hh84") pod "e4ad291b-4a20-4eec-b7f6-e9142c7fc576" (UID: "e4ad291b-4a20-4eec-b7f6-e9142c7fc576"). InnerVolumeSpecName "kube-api-access-8hh84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.081228 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert\") pod \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.081289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles\") pod \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.081326 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca\") pod \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\" (UID: \"e4ad291b-4a20-4eec-b7f6-e9142c7fc576\") " Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082317 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e4ad291b-4a20-4eec-b7f6-e9142c7fc576" (UID: "e4ad291b-4a20-4eec-b7f6-e9142c7fc576"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082433 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca" (OuterVolumeSpecName: "client-ca") pod "e4ad291b-4a20-4eec-b7f6-e9142c7fc576" (UID: "e4ad291b-4a20-4eec-b7f6-e9142c7fc576"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082848 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082878 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082892 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hh84\" (UniqueName: \"kubernetes.io/projected/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-kube-api-access-8hh84\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.082906 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.085009 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e4ad291b-4a20-4eec-b7f6-e9142c7fc576" (UID: "e4ad291b-4a20-4eec-b7f6-e9142c7fc576"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.183922 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ad291b-4a20-4eec-b7f6-e9142c7fc576-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.522560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" event={"ID":"e4ad291b-4a20-4eec-b7f6-e9142c7fc576","Type":"ContainerDied","Data":"b2af0e5fea49fd2a96aaefccd4e1443eba182d2512458ce375dc96d2417b488b"} Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.522604 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-7r4v8" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.522613 4760 scope.go:117] "RemoveContainer" containerID="55b7f2f55e8e1fa645df24aadf502e0b3c67edb30852a49ff2c33a508988adba" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.553581 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.555463 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-7r4v8"] Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.603162 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" path="/var/lib/kubelet/pods/e4ad291b-4a20-4eec-b7f6-e9142c7fc576/volumes" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.609121 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b68c89547-h4v9g"] Jan 23 18:06:27 crc kubenswrapper[4760]: E0123 18:06:27.609334 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" containerName="controller-manager" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.609345 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" containerName="controller-manager" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.609450 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ad291b-4a20-4eec-b7f6-e9142c7fc576" containerName="controller-manager" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.609836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.615704 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.616079 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.616179 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.616694 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.616787 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.616800 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.622378 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b68c89547-h4v9g"] Jan 23 
18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.623130 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.788764 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5416f709-e624-4cdd-b212-36ecbf0ac0a6-serving-cert\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.788824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-config\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.788988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-client-ca\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.789025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flt5\" (UniqueName: \"kubernetes.io/projected/5416f709-e624-4cdd-b212-36ecbf0ac0a6-kube-api-access-2flt5\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.789087 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-proxy-ca-bundles\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.889858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-proxy-ca-bundles\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.889921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5416f709-e624-4cdd-b212-36ecbf0ac0a6-serving-cert\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.889946 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-config\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.890020 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-client-ca\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " 
pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.890044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flt5\" (UniqueName: \"kubernetes.io/projected/5416f709-e624-4cdd-b212-36ecbf0ac0a6-kube-api-access-2flt5\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.891069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-proxy-ca-bundles\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.891279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-client-ca\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.891721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416f709-e624-4cdd-b212-36ecbf0ac0a6-config\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.898306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5416f709-e624-4cdd-b212-36ecbf0ac0a6-serving-cert\") pod 
\"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.905587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flt5\" (UniqueName: \"kubernetes.io/projected/5416f709-e624-4cdd-b212-36ecbf0ac0a6-kube-api-access-2flt5\") pod \"controller-manager-7b68c89547-h4v9g\" (UID: \"5416f709-e624-4cdd-b212-36ecbf0ac0a6\") " pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:27 crc kubenswrapper[4760]: I0123 18:06:27.928673 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.326391 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b68c89547-h4v9g"] Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.530319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" event={"ID":"5416f709-e624-4cdd-b212-36ecbf0ac0a6","Type":"ContainerStarted","Data":"f2be9257672c027317a2de34ef3e8832c2fac3e3e86b70bf62ebe261f7075531"} Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.530760 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.530836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" event={"ID":"5416f709-e624-4cdd-b212-36ecbf0ac0a6","Type":"ContainerStarted","Data":"bd5bad8c3de282bad9635be83da05d19432d164e5f3eb667429681352fda3957"} Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.536142 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" Jan 23 18:06:28 crc kubenswrapper[4760]: I0123 18:06:28.553981 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b68c89547-h4v9g" podStartSLOduration=2.553962086 podStartE2EDuration="2.553962086s" podCreationTimestamp="2026-01-23 18:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:28.549720402 +0000 UTC m=+331.552178355" watchObservedRunningTime="2026-01-23 18:06:28.553962086 +0000 UTC m=+331.556420019" Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.850954 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.851737 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fhzrk" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="registry-server" containerID="cri-o://28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7" gracePeriod=30 Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.878131 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.878463 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8kgm" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="registry-server" containerID="cri-o://de215f33a438c51a3fff03023471da4b43ffc29ae47ac729a6fee59a76986164" gracePeriod=30 Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.885784 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"] Jan 23 18:06:44 crc kubenswrapper[4760]: 
I0123 18:06:44.886045 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" containerID="cri-o://8c9538091a2de3236d2bd0f5f6c79121c26789ad0030e58da3254a710b9ee670" gracePeriod=30 Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.889966 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.890385 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4td8g" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="registry-server" containerID="cri-o://5dffd14f760d0967dc6f8886cdd0178b7c94e3ab67c09f543c762056e396e2bc" gracePeriod=30 Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.907152 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.908983 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m4kdb" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" containerID="cri-o://a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" gracePeriod=30 Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.917934 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2llpp"] Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.918687 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:44 crc kubenswrapper[4760]: I0123 18:06:44.931588 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2llpp"] Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.106854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.107151 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckcr\" (UniqueName: \"kubernetes.io/projected/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-kube-api-access-6ckcr\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.107188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.208137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2llpp\" (UID: 
\"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.208197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckcr\" (UniqueName: \"kubernetes.io/projected/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-kube-api-access-6ckcr\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.208231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.211476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.216553 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79 is running failed: container process not found" containerID="a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.217260 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79 is running failed: container process not found" containerID="a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.217843 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79 is running failed: container process not found" containerID="a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.217884 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-m4kdb" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.219136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2llpp\" (UID: \"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.235759 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckcr\" (UniqueName: \"kubernetes.io/projected/7dc5cdf6-ad52-4c3b-a100-08709b3e06c6-kube-api-access-6ckcr\") pod \"marketplace-operator-79b997595-2llpp\" (UID: 
\"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.311820 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.404287 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.512589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content\") pod \"036e8482-197b-4a5f-b33a-792ca966a04b\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.512662 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qh2l\" (UniqueName: \"kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l\") pod \"036e8482-197b-4a5f-b33a-792ca966a04b\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.512691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities\") pod \"036e8482-197b-4a5f-b33a-792ca966a04b\" (UID: \"036e8482-197b-4a5f-b33a-792ca966a04b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.513560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities" (OuterVolumeSpecName: "utilities") pod "036e8482-197b-4a5f-b33a-792ca966a04b" (UID: "036e8482-197b-4a5f-b33a-792ca966a04b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.519604 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l" (OuterVolumeSpecName: "kube-api-access-8qh2l") pod "036e8482-197b-4a5f-b33a-792ca966a04b" (UID: "036e8482-197b-4a5f-b33a-792ca966a04b"). InnerVolumeSpecName "kube-api-access-8qh2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.618293 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qh2l\" (UniqueName: \"kubernetes.io/projected/036e8482-197b-4a5f-b33a-792ca966a04b-kube-api-access-8qh2l\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.618329 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.634496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "036e8482-197b-4a5f-b33a-792ca966a04b" (UID: "036e8482-197b-4a5f-b33a-792ca966a04b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.655793 4760 generic.go:334] "Generic (PLEG): container finished" podID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerID="8c9538091a2de3236d2bd0f5f6c79121c26789ad0030e58da3254a710b9ee670" exitCode=0 Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.655856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerDied","Data":"8c9538091a2de3236d2bd0f5f6c79121c26789ad0030e58da3254a710b9ee670"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.656106 4760 scope.go:117] "RemoveContainer" containerID="62984e774ea701ec3386ea7348b41ad61c2075c6f755e60ac7e1f5868efb4523" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.658357 4760 generic.go:334] "Generic (PLEG): container finished" podID="036e8482-197b-4a5f-b33a-792ca966a04b" containerID="28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7" exitCode=0 Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.658422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerDied","Data":"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.658445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhzrk" event={"ID":"036e8482-197b-4a5f-b33a-792ca966a04b","Type":"ContainerDied","Data":"2770086b2cc20b6b1ad8fa4d087def395e147ba060f7bf33322aff5ee4b9927c"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.658463 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fhzrk" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.659011 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.660645 4760 generic.go:334] "Generic (PLEG): container finished" podID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerID="a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" exitCode=0 Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.660710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerDied","Data":"a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.663864 4760 generic.go:334] "Generic (PLEG): container finished" podID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerID="de215f33a438c51a3fff03023471da4b43ffc29ae47ac729a6fee59a76986164" exitCode=0 Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.663946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerDied","Data":"de215f33a438c51a3fff03023471da4b43ffc29ae47ac729a6fee59a76986164"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.666259 4760 generic.go:334] "Generic (PLEG): container finished" podID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerID="5dffd14f760d0967dc6f8886cdd0178b7c94e3ab67c09f543c762056e396e2bc" exitCode=0 Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.666294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" 
event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerDied","Data":"5dffd14f760d0967dc6f8886cdd0178b7c94e3ab67c09f543c762056e396e2bc"} Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.697902 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.710007 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.718747 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.720047 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/036e8482-197b-4a5f-b33a-792ca966a04b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.721891 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fhzrk"] Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.739476 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.740777 4760 scope.go:117] "RemoveContainer" containerID="28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.761109 4760 scope.go:117] "RemoveContainer" containerID="d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.778250 4760 scope.go:117] "RemoveContainer" containerID="5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.803483 4760 scope.go:117] "RemoveContainer" containerID="28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7" Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.804029 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7\": container with ID starting with 28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7 not found: ID does not exist" containerID="28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804074 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7"} err="failed to get container status \"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7\": rpc error: code = NotFound desc = could not find container \"28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7\": container with ID starting with 28508a2bc02b3461aa6e267e30ac8ce2f77cb8d068b7e00699dfb95cbc13e1c7 not found: ID does not exist" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804110 4760 scope.go:117] "RemoveContainer" 
containerID="d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329" Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.804542 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329\": container with ID starting with d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329 not found: ID does not exist" containerID="d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804560 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329"} err="failed to get container status \"d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329\": rpc error: code = NotFound desc = could not find container \"d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329\": container with ID starting with d1f3c21811c0992bb972cae7ea770b84fe135df8cd63b0052cf4660ded10c329 not found: ID does not exist" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804573 4760 scope.go:117] "RemoveContainer" containerID="5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab" Jan 23 18:06:45 crc kubenswrapper[4760]: E0123 18:06:45.804869 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab\": container with ID starting with 5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab not found: ID does not exist" containerID="5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804901 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab"} err="failed to get container status \"5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab\": rpc error: code = NotFound desc = could not find container \"5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab\": container with ID starting with 5ac08b1e2024b188a16f52b81db171f9a65fc3780403c4a9c4865651904330ab not found: ID does not exist" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.804927 4760 scope.go:117] "RemoveContainer" containerID="a170f4db59ce5f12660bbfc1a378afb553da8793bdb417d89be4c7c513cb2c79" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.820878 4760 scope.go:117] "RemoveContainer" containerID="fc97dbc07c3c398803a64cb7ac437dba1407c4514fbcfb3850b8488958ef00b4" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities\") pod \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821198 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca\") pod \"7106b645-deaf-47b1-9d00-5050fdd7b040\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qskt\" (UniqueName: \"kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt\") pod \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821266 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics\") pod \"7106b645-deaf-47b1-9d00-5050fdd7b040\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821293 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content\") pod \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities\") pod \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\" (UID: \"dc9cf1ca-5861-414c-a7ab-22380486fd2b\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821358 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content\") pod \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9z86\" (UniqueName: \"kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86\") pod \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\" (UID: \"861bf30c-c95a-42cf-9ced-f5cfbb2265c5\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821435 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bjt\" (UniqueName: \"kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt\") pod 
\"7106b645-deaf-47b1-9d00-5050fdd7b040\" (UID: \"7106b645-deaf-47b1-9d00-5050fdd7b040\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7106b645-deaf-47b1-9d00-5050fdd7b040" (UID: "7106b645-deaf-47b1-9d00-5050fdd7b040"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.821952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities" (OuterVolumeSpecName: "utilities") pod "861bf30c-c95a-42cf-9ced-f5cfbb2265c5" (UID: "861bf30c-c95a-42cf-9ced-f5cfbb2265c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.825784 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86" (OuterVolumeSpecName: "kube-api-access-s9z86") pod "861bf30c-c95a-42cf-9ced-f5cfbb2265c5" (UID: "861bf30c-c95a-42cf-9ced-f5cfbb2265c5"). InnerVolumeSpecName "kube-api-access-s9z86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.826079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7106b645-deaf-47b1-9d00-5050fdd7b040" (UID: "7106b645-deaf-47b1-9d00-5050fdd7b040"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.826275 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt" (OuterVolumeSpecName: "kube-api-access-2qskt") pod "dc9cf1ca-5861-414c-a7ab-22380486fd2b" (UID: "dc9cf1ca-5861-414c-a7ab-22380486fd2b"). InnerVolumeSpecName "kube-api-access-2qskt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.826707 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt" (OuterVolumeSpecName: "kube-api-access-c9bjt") pod "7106b645-deaf-47b1-9d00-5050fdd7b040" (UID: "7106b645-deaf-47b1-9d00-5050fdd7b040"). InnerVolumeSpecName "kube-api-access-c9bjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.834665 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities" (OuterVolumeSpecName: "utilities") pod "dc9cf1ca-5861-414c-a7ab-22380486fd2b" (UID: "dc9cf1ca-5861-414c-a7ab-22380486fd2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.839689 4760 scope.go:117] "RemoveContainer" containerID="23c2ddeaae5c38d409d34365a2eaa0392b9f3d0ff69e5941c707b887d65fa71d" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.845291 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "861bf30c-c95a-42cf-9ced-f5cfbb2265c5" (UID: "861bf30c-c95a-42cf-9ced-f5cfbb2265c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.922883 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxfzs\" (UniqueName: \"kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs\") pod \"550d2598-58ad-4e85-acd9-0bd0c945703e\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities\") pod \"550d2598-58ad-4e85-acd9-0bd0c945703e\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content\") pod \"550d2598-58ad-4e85-acd9-0bd0c945703e\" (UID: \"550d2598-58ad-4e85-acd9-0bd0c945703e\") " Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923295 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923312 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923323 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9z86\" (UniqueName: \"kubernetes.io/projected/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-kube-api-access-s9z86\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923332 
4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bjt\" (UniqueName: \"kubernetes.io/projected/7106b645-deaf-47b1-9d00-5050fdd7b040-kube-api-access-c9bjt\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923341 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861bf30c-c95a-42cf-9ced-f5cfbb2265c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923360 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923375 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qskt\" (UniqueName: \"kubernetes.io/projected/dc9cf1ca-5861-414c-a7ab-22380486fd2b-kube-api-access-2qskt\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.923386 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7106b645-deaf-47b1-9d00-5050fdd7b040-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.924668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities" (OuterVolumeSpecName: "utilities") pod "550d2598-58ad-4e85-acd9-0bd0c945703e" (UID: "550d2598-58ad-4e85-acd9-0bd0c945703e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.926635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs" (OuterVolumeSpecName: "kube-api-access-rxfzs") pod "550d2598-58ad-4e85-acd9-0bd0c945703e" (UID: "550d2598-58ad-4e85-acd9-0bd0c945703e"). InnerVolumeSpecName "kube-api-access-rxfzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.985912 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2llpp"] Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.992475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9cf1ca-5861-414c-a7ab-22380486fd2b" (UID: "dc9cf1ca-5861-414c-a7ab-22380486fd2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:45 crc kubenswrapper[4760]: I0123 18:06:45.992610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "550d2598-58ad-4e85-acd9-0bd0c945703e" (UID: "550d2598-58ad-4e85-acd9-0bd0c945703e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.025021 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxfzs\" (UniqueName: \"kubernetes.io/projected/550d2598-58ad-4e85-acd9-0bd0c945703e-kube-api-access-rxfzs\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.025057 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9cf1ca-5861-414c-a7ab-22380486fd2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.025071 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.025084 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550d2598-58ad-4e85-acd9-0bd0c945703e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.673061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4kdb" event={"ID":"dc9cf1ca-5861-414c-a7ab-22380486fd2b","Type":"ContainerDied","Data":"85ce4aab9491e7a5cfdec3a88eab3a2239730d37dcc11c77d6181e177e8c7370"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.673086 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4kdb" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.676765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8kgm" event={"ID":"550d2598-58ad-4e85-acd9-0bd0c945703e","Type":"ContainerDied","Data":"45509015f3d486513e7f3c15e76b512613259c0d8819343cae6ee8b0fe55498f"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.676815 4760 scope.go:117] "RemoveContainer" containerID="de215f33a438c51a3fff03023471da4b43ffc29ae47ac729a6fee59a76986164" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.676838 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8kgm" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.679587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4td8g" event={"ID":"861bf30c-c95a-42cf-9ced-f5cfbb2265c5","Type":"ContainerDied","Data":"5951f6baf424c7ac7933115cbed6fbae0438a0aa7856a56dc94a4a38f2f98de5"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.679595 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4td8g" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.682011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" event={"ID":"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6","Type":"ContainerStarted","Data":"fc067e4648c4ce7275f0013030d53bb90bb94b2b1831a3166c8c8a14ed8b12df"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.682926 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.683362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" event={"ID":"7dc5cdf6-ad52-4c3b-a100-08709b3e06c6","Type":"ContainerStarted","Data":"bd5868e1208be051ce5b233a0b90dcdfec70d0fbd0578919348ad5cc8750f3f1"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.683491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" event={"ID":"7106b645-deaf-47b1-9d00-5050fdd7b040","Type":"ContainerDied","Data":"404faa543b724e4f656cb3d826fc46dedf85b1aafed3bdb256059a9b36e73630"} Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.683548 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49s9b" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.686139 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.699395 4760 scope.go:117] "RemoveContainer" containerID="b3ed716d8512f68a637f6e3c15dc6fbca65377bd0f8731ed6b73bf52e551f163" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.715880 4760 scope.go:117] "RemoveContainer" containerID="15d7ac63c600b1f0b3dd6446cb2f2d519ce69ff181ae7e3364be2013dcdf0957" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.731545 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2llpp" podStartSLOduration=2.731530728 podStartE2EDuration="2.731530728s" podCreationTimestamp="2026-01-23 18:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:06:46.729319348 +0000 UTC m=+349.731777281" watchObservedRunningTime="2026-01-23 18:06:46.731530728 +0000 UTC m=+349.733988661" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.741062 4760 scope.go:117] "RemoveContainer" containerID="5dffd14f760d0967dc6f8886cdd0178b7c94e3ab67c09f543c762056e396e2bc" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.747133 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.751931 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49s9b"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.759783 4760 scope.go:117] "RemoveContainer" containerID="470265aa30e1084d941f0d3868a12f74b551b254e3c7a1cc0f267fb25abfe2cf" Jan 23 18:06:46 crc 
kubenswrapper[4760]: I0123 18:06:46.786553 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.791887 4760 scope.go:117] "RemoveContainer" containerID="eab1ec1f41fa88034240ce55df1314c14b4e3abff2781f00b75972e146448e4c" Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.797071 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4td8g"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.806242 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.809524 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8kgm"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.812351 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.814883 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m4kdb"] Jan 23 18:06:46 crc kubenswrapper[4760]: I0123 18:06:46.820030 4760 scope.go:117] "RemoveContainer" containerID="8c9538091a2de3236d2bd0f5f6c79121c26789ad0030e58da3254a710b9ee670" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264738 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2mmwq"] Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264912 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264923 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264931 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264937 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264946 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264951 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264959 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264965 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264974 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264980 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264988 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.264993 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.264999 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265006 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265012 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265017 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265024 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265029 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265037 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265042 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265049 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265054 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="extract-content" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 
18:06:47.265065 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265071 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265079 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265084 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="extract-utilities" Jan 23 18:06:47 crc kubenswrapper[4760]: E0123 18:06:47.265092 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265098 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265171 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265180 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265189 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265196 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc 
kubenswrapper[4760]: I0123 18:06:47.265205 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" containerName="marketplace-operator" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265215 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" containerName="registry-server" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.265809 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.268763 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.275283 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mmwq"] Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.441435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-catalog-content\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.441504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgm8\" (UniqueName: \"kubernetes.io/projected/6b1cac32-4085-41bd-83a1-f3488a2ca17f-kube-api-access-vwgm8\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.441665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-utilities\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.543573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-catalog-content\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.544181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgm8\" (UniqueName: \"kubernetes.io/projected/6b1cac32-4085-41bd-83a1-f3488a2ca17f-kube-api-access-vwgm8\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.544326 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-utilities\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.543970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-catalog-content\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.544875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b1cac32-4085-41bd-83a1-f3488a2ca17f-utilities\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.567367 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgm8\" (UniqueName: \"kubernetes.io/projected/6b1cac32-4085-41bd-83a1-f3488a2ca17f-kube-api-access-vwgm8\") pod \"certified-operators-2mmwq\" (UID: \"6b1cac32-4085-41bd-83a1-f3488a2ca17f\") " pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.584521 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.621025 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="036e8482-197b-4a5f-b33a-792ca966a04b" path="/var/lib/kubelet/pods/036e8482-197b-4a5f-b33a-792ca966a04b/volumes" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.622158 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550d2598-58ad-4e85-acd9-0bd0c945703e" path="/var/lib/kubelet/pods/550d2598-58ad-4e85-acd9-0bd0c945703e/volumes" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.624115 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7106b645-deaf-47b1-9d00-5050fdd7b040" path="/var/lib/kubelet/pods/7106b645-deaf-47b1-9d00-5050fdd7b040/volumes" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.625609 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861bf30c-c95a-42cf-9ced-f5cfbb2265c5" path="/var/lib/kubelet/pods/861bf30c-c95a-42cf-9ced-f5cfbb2265c5/volumes" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.630132 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9cf1ca-5861-414c-a7ab-22380486fd2b" 
path="/var/lib/kubelet/pods/dc9cf1ca-5861-414c-a7ab-22380486fd2b/volumes" Jan 23 18:06:47 crc kubenswrapper[4760]: I0123 18:06:47.956394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mmwq"] Jan 23 18:06:47 crc kubenswrapper[4760]: W0123 18:06:47.965021 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1cac32_4085_41bd_83a1_f3488a2ca17f.slice/crio-b940221a7385f28cf8af74cb6dfef46e0a821679c0e136f20495f03348ca373a WatchSource:0}: Error finding container b940221a7385f28cf8af74cb6dfef46e0a821679c0e136f20495f03348ca373a: Status 404 returned error can't find the container with id b940221a7385f28cf8af74cb6dfef46e0a821679c0e136f20495f03348ca373a Jan 23 18:06:48 crc kubenswrapper[4760]: I0123 18:06:48.701571 4760 generic.go:334] "Generic (PLEG): container finished" podID="6b1cac32-4085-41bd-83a1-f3488a2ca17f" containerID="c26051e33cd2daa4bcd5c163995d088c57120f11d6827009b3200f9b65dfb154" exitCode=0 Jan 23 18:06:48 crc kubenswrapper[4760]: I0123 18:06:48.701709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mmwq" event={"ID":"6b1cac32-4085-41bd-83a1-f3488a2ca17f","Type":"ContainerDied","Data":"c26051e33cd2daa4bcd5c163995d088c57120f11d6827009b3200f9b65dfb154"} Jan 23 18:06:48 crc kubenswrapper[4760]: I0123 18:06:48.701787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mmwq" event={"ID":"6b1cac32-4085-41bd-83a1-f3488a2ca17f","Type":"ContainerStarted","Data":"b940221a7385f28cf8af74cb6dfef46e0a821679c0e136f20495f03348ca373a"} Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.076639 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94f29"] Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.077942 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.082151 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.089244 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94f29"] Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.164279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kpx4\" (UniqueName: \"kubernetes.io/projected/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-kube-api-access-4kpx4\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.164528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-utilities\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.164683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-catalog-content\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.265711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-utilities\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " 
pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.265815 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-catalog-content\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.265886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kpx4\" (UniqueName: \"kubernetes.io/projected/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-kube-api-access-4kpx4\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.266278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-utilities\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.266360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-catalog-content\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.290022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kpx4\" (UniqueName: \"kubernetes.io/projected/d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c-kube-api-access-4kpx4\") pod \"redhat-operators-94f29\" (UID: \"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c\") " pod="openshift-marketplace/redhat-operators-94f29" Jan 
23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.403913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.665992 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn5nq"] Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.668376 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.671446 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn5nq"] Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.672290 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.707485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mmwq" event={"ID":"6b1cac32-4085-41bd-83a1-f3488a2ca17f","Type":"ContainerStarted","Data":"ce615b1f02e0e60cc33905c4d54c828a425b97997cbd3aaff89feace88e2f913"} Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.770061 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjztg\" (UniqueName: \"kubernetes.io/projected/8e461375-5185-45d5-9abe-89c57c170d0c-kube-api-access-sjztg\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.770120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-utilities\") pod \"community-operators-hn5nq\" (UID: 
\"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.770154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-catalog-content\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.774083 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94f29"] Jan 23 18:06:49 crc kubenswrapper[4760]: W0123 18:06:49.805058 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8c774d8_e0cd_4c3d_949d_dfdec7b1d31c.slice/crio-592daa1c5119e7bc94dfd85b7565d1d8a36e624def61b89a7983c3b3b4906484 WatchSource:0}: Error finding container 592daa1c5119e7bc94dfd85b7565d1d8a36e624def61b89a7983c3b3b4906484: Status 404 returned error can't find the container with id 592daa1c5119e7bc94dfd85b7565d1d8a36e624def61b89a7983c3b3b4906484 Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.874016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjztg\" (UniqueName: \"kubernetes.io/projected/8e461375-5185-45d5-9abe-89c57c170d0c-kube-api-access-sjztg\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.874130 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-utilities\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " 
pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.874183 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-catalog-content\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.875844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-utilities\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.875871 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e461375-5185-45d5-9abe-89c57c170d0c-catalog-content\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.893329 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjztg\" (UniqueName: \"kubernetes.io/projected/8e461375-5185-45d5-9abe-89c57c170d0c-kube-api-access-sjztg\") pod \"community-operators-hn5nq\" (UID: \"8e461375-5185-45d5-9abe-89c57c170d0c\") " pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:49 crc kubenswrapper[4760]: I0123 18:06:49.984887 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.345418 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn5nq"] Jan 23 18:06:50 crc kubenswrapper[4760]: W0123 18:06:50.348088 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e461375_5185_45d5_9abe_89c57c170d0c.slice/crio-6d034a290462d161f60fcfca8c73c186f67b3a3841690363562b66f95342c817 WatchSource:0}: Error finding container 6d034a290462d161f60fcfca8c73c186f67b3a3841690363562b66f95342c817: Status 404 returned error can't find the container with id 6d034a290462d161f60fcfca8c73c186f67b3a3841690363562b66f95342c817 Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.712810 4760 generic.go:334] "Generic (PLEG): container finished" podID="8e461375-5185-45d5-9abe-89c57c170d0c" containerID="1db37d70660eba34e2bea257248e75f2087edbfd4d0a0d0aeff3ff537cf3e694" exitCode=0 Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.712917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn5nq" event={"ID":"8e461375-5185-45d5-9abe-89c57c170d0c","Type":"ContainerDied","Data":"1db37d70660eba34e2bea257248e75f2087edbfd4d0a0d0aeff3ff537cf3e694"} Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.713140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn5nq" event={"ID":"8e461375-5185-45d5-9abe-89c57c170d0c","Type":"ContainerStarted","Data":"6d034a290462d161f60fcfca8c73c186f67b3a3841690363562b66f95342c817"} Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.715785 4760 generic.go:334] "Generic (PLEG): container finished" podID="d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c" containerID="dddeed8aea2b887f6e04b49b2084072b5d5889f0e1b14abf9c57b1d354aa1d60" exitCode=0 Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 
18:06:50.715840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f29" event={"ID":"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c","Type":"ContainerDied","Data":"dddeed8aea2b887f6e04b49b2084072b5d5889f0e1b14abf9c57b1d354aa1d60"} Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.715862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f29" event={"ID":"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c","Type":"ContainerStarted","Data":"592daa1c5119e7bc94dfd85b7565d1d8a36e624def61b89a7983c3b3b4906484"} Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.719923 4760 generic.go:334] "Generic (PLEG): container finished" podID="6b1cac32-4085-41bd-83a1-f3488a2ca17f" containerID="ce615b1f02e0e60cc33905c4d54c828a425b97997cbd3aaff89feace88e2f913" exitCode=0 Jan 23 18:06:50 crc kubenswrapper[4760]: I0123 18:06:50.719971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mmwq" event={"ID":"6b1cac32-4085-41bd-83a1-f3488a2ca17f","Type":"ContainerDied","Data":"ce615b1f02e0e60cc33905c4d54c828a425b97997cbd3aaff89feace88e2f913"} Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.464859 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5h26"] Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.469360 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.471874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.471941 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5h26"] Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.591766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjc5h\" (UniqueName: \"kubernetes.io/projected/139f0f56-e7d6-4950-8313-ba6047ab4955-kube-api-access-wjc5h\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.592138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-utilities\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.592234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-catalog-content\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.693307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjc5h\" (UniqueName: \"kubernetes.io/projected/139f0f56-e7d6-4950-8313-ba6047ab4955-kube-api-access-wjc5h\") pod \"redhat-marketplace-q5h26\" (UID: 
\"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.693373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-utilities\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.693399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-catalog-content\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.693786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-utilities\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.693805 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139f0f56-e7d6-4950-8313-ba6047ab4955-catalog-content\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.713841 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjc5h\" (UniqueName: \"kubernetes.io/projected/139f0f56-e7d6-4950-8313-ba6047ab4955-kube-api-access-wjc5h\") pod \"redhat-marketplace-q5h26\" (UID: \"139f0f56-e7d6-4950-8313-ba6047ab4955\") " 
pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.726882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn5nq" event={"ID":"8e461375-5185-45d5-9abe-89c57c170d0c","Type":"ContainerStarted","Data":"c681223a1dbc2d9c785876e95fe433eef6b118d4ccf2a9b12369d20c8692a082"} Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.728982 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f29" event={"ID":"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c","Type":"ContainerStarted","Data":"aa490c3c5c9170310f9e816991b4b5be82c74b30cd3d0ac060ad47a438c25317"} Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.731245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mmwq" event={"ID":"6b1cac32-4085-41bd-83a1-f3488a2ca17f","Type":"ContainerStarted","Data":"7728b1627a8a9f4d1f6f3b5d57064922e243b58402a252ac6a2590cfbc7fa5a9"} Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.781928 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2mmwq" podStartSLOduration=2.408152087 podStartE2EDuration="4.781906828s" podCreationTimestamp="2026-01-23 18:06:47 +0000 UTC" firstStartedPulling="2026-01-23 18:06:48.708528365 +0000 UTC m=+351.710986338" lastFinishedPulling="2026-01-23 18:06:51.082283146 +0000 UTC m=+354.084741079" observedRunningTime="2026-01-23 18:06:51.779077649 +0000 UTC m=+354.781535582" watchObservedRunningTime="2026-01-23 18:06:51.781906828 +0000 UTC m=+354.784364761" Jan 23 18:06:51 crc kubenswrapper[4760]: I0123 18:06:51.792781 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.176180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5h26"] Jan 23 18:06:52 crc kubenswrapper[4760]: W0123 18:06:52.182372 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139f0f56_e7d6_4950_8313_ba6047ab4955.slice/crio-091b98454c66468b538953c3a5cad50c2a15ff7c258d5b84b17371178a1cf71b WatchSource:0}: Error finding container 091b98454c66468b538953c3a5cad50c2a15ff7c258d5b84b17371178a1cf71b: Status 404 returned error can't find the container with id 091b98454c66468b538953c3a5cad50c2a15ff7c258d5b84b17371178a1cf71b Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.736800 4760 generic.go:334] "Generic (PLEG): container finished" podID="139f0f56-e7d6-4950-8313-ba6047ab4955" containerID="44aae436ca294489c3d199ecdfa359439a6b5fa7da2637cf903a0ab919133ce6" exitCode=0 Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.737040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5h26" event={"ID":"139f0f56-e7d6-4950-8313-ba6047ab4955","Type":"ContainerDied","Data":"44aae436ca294489c3d199ecdfa359439a6b5fa7da2637cf903a0ab919133ce6"} Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.737064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5h26" event={"ID":"139f0f56-e7d6-4950-8313-ba6047ab4955","Type":"ContainerStarted","Data":"091b98454c66468b538953c3a5cad50c2a15ff7c258d5b84b17371178a1cf71b"} Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.741131 4760 generic.go:334] "Generic (PLEG): container finished" podID="8e461375-5185-45d5-9abe-89c57c170d0c" containerID="c681223a1dbc2d9c785876e95fe433eef6b118d4ccf2a9b12369d20c8692a082" exitCode=0 Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 
18:06:52.741166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn5nq" event={"ID":"8e461375-5185-45d5-9abe-89c57c170d0c","Type":"ContainerDied","Data":"c681223a1dbc2d9c785876e95fe433eef6b118d4ccf2a9b12369d20c8692a082"} Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.743561 4760 generic.go:334] "Generic (PLEG): container finished" podID="d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c" containerID="aa490c3c5c9170310f9e816991b4b5be82c74b30cd3d0ac060ad47a438c25317" exitCode=0 Jan 23 18:06:52 crc kubenswrapper[4760]: I0123 18:06:52.744197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f29" event={"ID":"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c","Type":"ContainerDied","Data":"aa490c3c5c9170310f9e816991b4b5be82c74b30cd3d0ac060ad47a438c25317"} Jan 23 18:06:53 crc kubenswrapper[4760]: I0123 18:06:53.751020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn5nq" event={"ID":"8e461375-5185-45d5-9abe-89c57c170d0c","Type":"ContainerStarted","Data":"519d8d9f28591df520b6acf4530dcf82c1949941530cdcf6e2e03a46733beee5"} Jan 23 18:06:53 crc kubenswrapper[4760]: I0123 18:06:53.753522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f29" event={"ID":"d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c","Type":"ContainerStarted","Data":"c009535253870af8824355033800e2c9550181983155d665173fcb73349f9abe"} Jan 23 18:06:53 crc kubenswrapper[4760]: I0123 18:06:53.809583 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn5nq" podStartSLOduration=2.277867346 podStartE2EDuration="4.809568376s" podCreationTimestamp="2026-01-23 18:06:49 +0000 UTC" firstStartedPulling="2026-01-23 18:06:50.713888218 +0000 UTC m=+353.716346151" lastFinishedPulling="2026-01-23 18:06:53.245589248 +0000 UTC m=+356.248047181" observedRunningTime="2026-01-23 
18:06:53.771629677 +0000 UTC m=+356.774087620" watchObservedRunningTime="2026-01-23 18:06:53.809568376 +0000 UTC m=+356.812026309" Jan 23 18:06:53 crc kubenswrapper[4760]: I0123 18:06:53.811543 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94f29" podStartSLOduration=2.300585463 podStartE2EDuration="4.811536768s" podCreationTimestamp="2026-01-23 18:06:49 +0000 UTC" firstStartedPulling="2026-01-23 18:06:50.717481931 +0000 UTC m=+353.719939864" lastFinishedPulling="2026-01-23 18:06:53.228433236 +0000 UTC m=+356.230891169" observedRunningTime="2026-01-23 18:06:53.807821601 +0000 UTC m=+356.810279534" watchObservedRunningTime="2026-01-23 18:06:53.811536768 +0000 UTC m=+356.813994701" Jan 23 18:06:54 crc kubenswrapper[4760]: I0123 18:06:54.761118 4760 generic.go:334] "Generic (PLEG): container finished" podID="139f0f56-e7d6-4950-8313-ba6047ab4955" containerID="dd00d68372c46a026c21ad51ccfc80d427bc868d14e222ea5616c0a7e7ee364e" exitCode=0 Jan 23 18:06:54 crc kubenswrapper[4760]: I0123 18:06:54.761976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5h26" event={"ID":"139f0f56-e7d6-4950-8313-ba6047ab4955","Type":"ContainerDied","Data":"dd00d68372c46a026c21ad51ccfc80d427bc868d14e222ea5616c0a7e7ee364e"} Jan 23 18:06:56 crc kubenswrapper[4760]: I0123 18:06:56.773757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5h26" event={"ID":"139f0f56-e7d6-4950-8313-ba6047ab4955","Type":"ContainerStarted","Data":"296865542b3eb8530f5307e14805ab3e809f496cc466de1ba16d4e5a1959473b"} Jan 23 18:06:56 crc kubenswrapper[4760]: I0123 18:06:56.795625 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5h26" podStartSLOduration=2.904905948 podStartE2EDuration="5.795611251s" podCreationTimestamp="2026-01-23 18:06:51 +0000 UTC" firstStartedPulling="2026-01-23 
18:06:52.738429966 +0000 UTC m=+355.740887899" lastFinishedPulling="2026-01-23 18:06:55.629135259 +0000 UTC m=+358.631593202" observedRunningTime="2026-01-23 18:06:56.792070448 +0000 UTC m=+359.794528371" watchObservedRunningTime="2026-01-23 18:06:56.795611251 +0000 UTC m=+359.798069174" Jan 23 18:06:57 crc kubenswrapper[4760]: I0123 18:06:57.585006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:57 crc kubenswrapper[4760]: I0123 18:06:57.585046 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:57 crc kubenswrapper[4760]: I0123 18:06:57.624472 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:57 crc kubenswrapper[4760]: I0123 18:06:57.814767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2mmwq" Jan 23 18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.404537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.404853 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.442581 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.835781 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94f29" Jan 23 18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.985259 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 
18:06:59 crc kubenswrapper[4760]: I0123 18:06:59.985563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:07:00 crc kubenswrapper[4760]: I0123 18:07:00.021247 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:07:00 crc kubenswrapper[4760]: I0123 18:07:00.842385 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn5nq" Jan 23 18:07:01 crc kubenswrapper[4760]: I0123 18:07:01.793506 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:07:01 crc kubenswrapper[4760]: I0123 18:07:01.794132 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:07:01 crc kubenswrapper[4760]: I0123 18:07:01.835060 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:07:02 crc kubenswrapper[4760]: I0123 18:07:02.851987 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5h26" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.748682 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rzrgq"] Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.749535 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.770965 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rzrgq"] Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.838309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee9759c7-a010-43f4-a9fa-42c97e8714e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.838659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-certificates\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.838777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-bound-sa-token\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.838874 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee9759c7-a010-43f4-a9fa-42c97e8714e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.838960 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.839046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpglp\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-kube-api-access-dpglp\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.839265 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-tls\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.839325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-trusted-ca\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.877467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.940352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-tls\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.940441 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-trusted-ca\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.940480 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee9759c7-a010-43f4-a9fa-42c97e8714e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.940510 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-certificates\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.940641 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-bound-sa-token\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.941366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee9759c7-a010-43f4-a9fa-42c97e8714e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.941425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpglp\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-kube-api-access-dpglp\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.941609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-trusted-ca\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.941821 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-certificates\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 
23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.941929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee9759c7-a010-43f4-a9fa-42c97e8714e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.957372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-registry-tls\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.957421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee9759c7-a010-43f4-a9fa-42c97e8714e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.962942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-bound-sa-token\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:03 crc kubenswrapper[4760]: I0123 18:07:03.969274 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpglp\" (UniqueName: \"kubernetes.io/projected/ee9759c7-a010-43f4-a9fa-42c97e8714e7-kube-api-access-dpglp\") pod \"image-registry-66df7c8f76-rzrgq\" (UID: \"ee9759c7-a010-43f4-a9fa-42c97e8714e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:04 crc kubenswrapper[4760]: I0123 18:07:04.065168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:04 crc kubenswrapper[4760]: I0123 18:07:04.568124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rzrgq"] Jan 23 18:07:04 crc kubenswrapper[4760]: W0123 18:07:04.572978 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9759c7_a010_43f4_a9fa_42c97e8714e7.slice/crio-77380507c937405c0aa272af5b5c57dbfc299bde2cd7174c0083f21ad47f549d WatchSource:0}: Error finding container 77380507c937405c0aa272af5b5c57dbfc299bde2cd7174c0083f21ad47f549d: Status 404 returned error can't find the container with id 77380507c937405c0aa272af5b5c57dbfc299bde2cd7174c0083f21ad47f549d Jan 23 18:07:04 crc kubenswrapper[4760]: I0123 18:07:04.818796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" event={"ID":"ee9759c7-a010-43f4-a9fa-42c97e8714e7","Type":"ContainerStarted","Data":"77380507c937405c0aa272af5b5c57dbfc299bde2cd7174c0083f21ad47f549d"} Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.386721 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.386963 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" podUID="571d4c2b-89b1-4622-a1e1-741208075455" containerName="route-controller-manager" containerID="cri-o://58250d864f7b1c3cd0411be786857a06a4619961d1854704a776be4923342662" gracePeriod=30 Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.830108 4760 
generic.go:334] "Generic (PLEG): container finished" podID="571d4c2b-89b1-4622-a1e1-741208075455" containerID="58250d864f7b1c3cd0411be786857a06a4619961d1854704a776be4923342662" exitCode=0 Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.830194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" event={"ID":"571d4c2b-89b1-4622-a1e1-741208075455","Type":"ContainerDied","Data":"58250d864f7b1c3cd0411be786857a06a4619961d1854704a776be4923342662"} Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.832088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" event={"ID":"ee9759c7-a010-43f4-a9fa-42c97e8714e7","Type":"ContainerStarted","Data":"5b1473aa7765d01fe6775c2ac51501f0faabce47fda99aad4f57563fccf9d5be"} Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.832238 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:06 crc kubenswrapper[4760]: I0123 18:07:06.849692 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" podStartSLOduration=3.849674656 podStartE2EDuration="3.849674656s" podCreationTimestamp="2026-01-23 18:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:07:06.848378005 +0000 UTC m=+369.850835938" watchObservedRunningTime="2026-01-23 18:07:06.849674656 +0000 UTC m=+369.852132589" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.314642 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.383201 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca\") pod \"571d4c2b-89b1-4622-a1e1-741208075455\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.383277 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxdw7\" (UniqueName: \"kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7\") pod \"571d4c2b-89b1-4622-a1e1-741208075455\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.383308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config\") pod \"571d4c2b-89b1-4622-a1e1-741208075455\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.383367 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert\") pod \"571d4c2b-89b1-4622-a1e1-741208075455\" (UID: \"571d4c2b-89b1-4622-a1e1-741208075455\") " Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.384016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca" (OuterVolumeSpecName: "client-ca") pod "571d4c2b-89b1-4622-a1e1-741208075455" (UID: "571d4c2b-89b1-4622-a1e1-741208075455"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.384428 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config" (OuterVolumeSpecName: "config") pod "571d4c2b-89b1-4622-a1e1-741208075455" (UID: "571d4c2b-89b1-4622-a1e1-741208075455"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.388527 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7" (OuterVolumeSpecName: "kube-api-access-mxdw7") pod "571d4c2b-89b1-4622-a1e1-741208075455" (UID: "571d4c2b-89b1-4622-a1e1-741208075455"). InnerVolumeSpecName "kube-api-access-mxdw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.388700 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "571d4c2b-89b1-4622-a1e1-741208075455" (UID: "571d4c2b-89b1-4622-a1e1-741208075455"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.484823 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxdw7\" (UniqueName: \"kubernetes.io/projected/571d4c2b-89b1-4622-a1e1-741208075455-kube-api-access-mxdw7\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.485176 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.485251 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/571d4c2b-89b1-4622-a1e1-741208075455-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.485272 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/571d4c2b-89b1-4622-a1e1-741208075455-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.646553 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp"] Jan 23 18:07:07 crc kubenswrapper[4760]: E0123 18:07:07.648374 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571d4c2b-89b1-4622-a1e1-741208075455" containerName="route-controller-manager" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.648428 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="571d4c2b-89b1-4622-a1e1-741208075455" containerName="route-controller-manager" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.648531 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="571d4c2b-89b1-4622-a1e1-741208075455" containerName="route-controller-manager" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.648916 4760 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.656649 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp"] Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.790973 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e8445d-7492-4a52-897b-420511800dc0-serving-cert\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.791247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-config\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.791392 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trw2t\" (UniqueName: \"kubernetes.io/projected/47e8445d-7492-4a52-897b-420511800dc0-kube-api-access-trw2t\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.791510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-client-ca\") pod 
\"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.838365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" event={"ID":"571d4c2b-89b1-4622-a1e1-741208075455","Type":"ContainerDied","Data":"2760521ac7de978719f27df2bc21dd6d5e0ac9c6c964df9b35ed57cfe204b4fc"} Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.838378 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.838431 4760 scope.go:117] "RemoveContainer" containerID="58250d864f7b1c3cd0411be786857a06a4619961d1854704a776be4923342662" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.857338 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.860717 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-xdnqm"] Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.892799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e8445d-7492-4a52-897b-420511800dc0-serving-cert\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.892891 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-config\") 
pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.893032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trw2t\" (UniqueName: \"kubernetes.io/projected/47e8445d-7492-4a52-897b-420511800dc0-kube-api-access-trw2t\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.893222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-client-ca\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.894781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-client-ca\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.897240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e8445d-7492-4a52-897b-420511800dc0-config\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.897540 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e8445d-7492-4a52-897b-420511800dc0-serving-cert\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.908898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trw2t\" (UniqueName: \"kubernetes.io/projected/47e8445d-7492-4a52-897b-420511800dc0-kube-api-access-trw2t\") pod \"route-controller-manager-987dfdb5-rvlvp\" (UID: \"47e8445d-7492-4a52-897b-420511800dc0\") " pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:07 crc kubenswrapper[4760]: I0123 18:07:07.966314 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.365385 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp"] Jan 23 18:07:08 crc kubenswrapper[4760]: W0123 18:07:08.377376 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e8445d_7492_4a52_897b_420511800dc0.slice/crio-fc093a5f7b1f5bfa0b2023c612f78b09a051caa3b7d08e322b63eaff63c43e80 WatchSource:0}: Error finding container fc093a5f7b1f5bfa0b2023c612f78b09a051caa3b7d08e322b63eaff63c43e80: Status 404 returned error can't find the container with id fc093a5f7b1f5bfa0b2023c612f78b09a051caa3b7d08e322b63eaff63c43e80 Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.845090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" 
event={"ID":"47e8445d-7492-4a52-897b-420511800dc0","Type":"ContainerStarted","Data":"c2f27a3a9ab9ffe722ce3779dd1ec52fcfd03945792960bb65e5a76fa8cac474"} Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.845135 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" event={"ID":"47e8445d-7492-4a52-897b-420511800dc0","Type":"ContainerStarted","Data":"fc093a5f7b1f5bfa0b2023c612f78b09a051caa3b7d08e322b63eaff63c43e80"} Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.845172 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.870092 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" podStartSLOduration=2.870070354 podStartE2EDuration="2.870070354s" podCreationTimestamp="2026-01-23 18:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:07:08.86488039 +0000 UTC m=+371.867338323" watchObservedRunningTime="2026-01-23 18:07:08.870070354 +0000 UTC m=+371.872528307" Jan 23 18:07:08 crc kubenswrapper[4760]: I0123 18:07:08.991386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-987dfdb5-rvlvp" Jan 23 18:07:09 crc kubenswrapper[4760]: I0123 18:07:09.600930 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571d4c2b-89b1-4622-a1e1-741208075455" path="/var/lib/kubelet/pods/571d4c2b-89b1-4622-a1e1-741208075455/volumes" Jan 23 18:07:16 crc kubenswrapper[4760]: I0123 18:07:16.075609 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:07:16 crc kubenswrapper[4760]: I0123 18:07:16.075916 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:07:24 crc kubenswrapper[4760]: I0123 18:07:24.072754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rzrgq" Jan 23 18:07:24 crc kubenswrapper[4760]: I0123 18:07:24.140353 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"] Jan 23 18:07:46 crc kubenswrapper[4760]: I0123 18:07:46.075467 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:07:46 crc kubenswrapper[4760]: I0123 18:07:46.076708 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.176575 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" podUID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" containerName="registry" 
containerID="cri-o://8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1" gracePeriod=30 Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.535095 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.645841 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.645968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646036 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98nph\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.646359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls\") pod \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\" (UID: \"c1747fa8-d09e-4415-ab0e-e607a674dfbb\") " Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.647049 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.647212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.651943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.652228 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.652599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.655155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph" (OuterVolumeSpecName: "kube-api-access-98nph") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "kube-api-access-98nph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.659435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.665119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c1747fa8-d09e-4415-ab0e-e607a674dfbb" (UID: "c1747fa8-d09e-4415-ab0e-e607a674dfbb"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747393 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747469 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747489 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1747fa8-d09e-4415-ab0e-e607a674dfbb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747508 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1747fa8-d09e-4415-ab0e-e607a674dfbb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747525 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98nph\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-kube-api-access-98nph\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747541 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1747fa8-d09e-4415-ab0e-e607a674dfbb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:49 crc kubenswrapper[4760]: I0123 18:07:49.747557 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1747fa8-d09e-4415-ab0e-e607a674dfbb-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:50 crc 
kubenswrapper[4760]: I0123 18:07:50.070762 4760 generic.go:334] "Generic (PLEG): container finished" podID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" containerID="8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1" exitCode=0 Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.070815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" event={"ID":"c1747fa8-d09e-4415-ab0e-e607a674dfbb","Type":"ContainerDied","Data":"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1"} Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.070852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" event={"ID":"c1747fa8-d09e-4415-ab0e-e607a674dfbb","Type":"ContainerDied","Data":"0b175298e35a76269616152438b777c8a236af3144f9b5831214065a5247d8c5"} Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.070866 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9d9kf" Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.070875 4760 scope.go:117] "RemoveContainer" containerID="8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1" Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.089857 4760 scope.go:117] "RemoveContainer" containerID="8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1" Jan 23 18:07:50 crc kubenswrapper[4760]: E0123 18:07:50.090821 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1\": container with ID starting with 8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1 not found: ID does not exist" containerID="8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1" Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.090862 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1"} err="failed to get container status \"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1\": rpc error: code = NotFound desc = could not find container \"8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1\": container with ID starting with 8ad7bf9545b8600ea923b2681aeb71de03b13616eb3f3b7fcb77b7f7d23797d1 not found: ID does not exist" Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.123321 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"] Jan 23 18:07:50 crc kubenswrapper[4760]: I0123 18:07:50.130994 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9d9kf"] Jan 23 18:07:51 crc kubenswrapper[4760]: I0123 18:07:51.603668 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" path="/var/lib/kubelet/pods/c1747fa8-d09e-4415-ab0e-e607a674dfbb/volumes" Jan 23 18:08:16 crc kubenswrapper[4760]: I0123 18:08:16.075543 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:08:16 crc kubenswrapper[4760]: I0123 18:08:16.076187 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:08:16 crc kubenswrapper[4760]: I0123 18:08:16.076238 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:08:16 crc kubenswrapper[4760]: I0123 18:08:16.076858 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:08:16 crc kubenswrapper[4760]: I0123 18:08:16.076918 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200" gracePeriod=600 Jan 23 18:08:17 crc kubenswrapper[4760]: I0123 18:08:17.234566 4760 generic.go:334] "Generic 
(PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200" exitCode=0 Jan 23 18:08:17 crc kubenswrapper[4760]: I0123 18:08:17.234630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200"} Jan 23 18:08:17 crc kubenswrapper[4760]: I0123 18:08:17.235020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389"} Jan 23 18:08:17 crc kubenswrapper[4760]: I0123 18:08:17.235057 4760 scope.go:117] "RemoveContainer" containerID="9ef275bdc8632c507445a305bfda16cf72693ca8ef6b14fea153ebd9061b3880" Jan 23 18:10:46 crc kubenswrapper[4760]: I0123 18:10:46.076034 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:10:46 crc kubenswrapper[4760]: I0123 18:10:46.076483 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:11:16 crc kubenswrapper[4760]: I0123 18:11:16.075380 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:11:16 crc kubenswrapper[4760]: I0123 18:11:16.075932 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.075276 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.075842 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.075892 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.076402 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.076482 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389" gracePeriod=600 Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.611017 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389" exitCode=0 Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.611072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389"} Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.611387 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060"} Jan 23 18:11:46 crc kubenswrapper[4760]: I0123 18:11:46.611449 4760 scope.go:117] "RemoveContainer" containerID="6642c703214bc9e29baea5dfa1c582c054b72a25e44988dec0644e8c8d5ca200" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.223997 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t58c6"] Jan 23 18:13:26 crc kubenswrapper[4760]: E0123 18:13:26.224856 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" containerName="registry" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.224874 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" containerName="registry" Jan 23 
18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.225014 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1747fa8-d09e-4415-ab0e-e607a674dfbb" containerName="registry" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.225569 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.227384 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.227638 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.233981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-pnwww"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.234009 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xqd2c" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.235345 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pnwww" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.237984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t58c6"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.238550 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4vlxr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.246653 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-f4cdr"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.247606 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.251829 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kmwfw" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.264267 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-f4cdr"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.272456 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pnwww"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.377697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68c2x\" (UniqueName: \"kubernetes.io/projected/0a7dda60-1788-4458-b1c4-fa4ecfd723a2-kube-api-access-68c2x\") pod \"cert-manager-cainjector-cf98fcc89-t58c6\" (UID: \"0a7dda60-1788-4458-b1c4-fa4ecfd723a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.377805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwckh\" (UniqueName: \"kubernetes.io/projected/9ef58368-3cff-49ff-8dfd-17ae3ff9e710-kube-api-access-lwckh\") pod \"cert-manager-858654f9db-pnwww\" (UID: \"9ef58368-3cff-49ff-8dfd-17ae3ff9e710\") " pod="cert-manager/cert-manager-858654f9db-pnwww" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.377873 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5qz\" (UniqueName: \"kubernetes.io/projected/8eed5962-6318-49b8-82a5-7f10b629d81c-kube-api-access-pg5qz\") pod \"cert-manager-webhook-687f57d79b-f4cdr\" (UID: \"8eed5962-6318-49b8-82a5-7f10b629d81c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.478480 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5qz\" (UniqueName: \"kubernetes.io/projected/8eed5962-6318-49b8-82a5-7f10b629d81c-kube-api-access-pg5qz\") pod \"cert-manager-webhook-687f57d79b-f4cdr\" (UID: \"8eed5962-6318-49b8-82a5-7f10b629d81c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.478545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68c2x\" (UniqueName: \"kubernetes.io/projected/0a7dda60-1788-4458-b1c4-fa4ecfd723a2-kube-api-access-68c2x\") pod \"cert-manager-cainjector-cf98fcc89-t58c6\" (UID: \"0a7dda60-1788-4458-b1c4-fa4ecfd723a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.478582 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwckh\" (UniqueName: \"kubernetes.io/projected/9ef58368-3cff-49ff-8dfd-17ae3ff9e710-kube-api-access-lwckh\") pod \"cert-manager-858654f9db-pnwww\" (UID: \"9ef58368-3cff-49ff-8dfd-17ae3ff9e710\") " pod="cert-manager/cert-manager-858654f9db-pnwww" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.498484 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68c2x\" (UniqueName: \"kubernetes.io/projected/0a7dda60-1788-4458-b1c4-fa4ecfd723a2-kube-api-access-68c2x\") pod \"cert-manager-cainjector-cf98fcc89-t58c6\" (UID: \"0a7dda60-1788-4458-b1c4-fa4ecfd723a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.501067 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5qz\" (UniqueName: \"kubernetes.io/projected/8eed5962-6318-49b8-82a5-7f10b629d81c-kube-api-access-pg5qz\") pod \"cert-manager-webhook-687f57d79b-f4cdr\" (UID: \"8eed5962-6318-49b8-82a5-7f10b629d81c\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.504626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwckh\" (UniqueName: \"kubernetes.io/projected/9ef58368-3cff-49ff-8dfd-17ae3ff9e710-kube-api-access-lwckh\") pod \"cert-manager-858654f9db-pnwww\" (UID: \"9ef58368-3cff-49ff-8dfd-17ae3ff9e710\") " pod="cert-manager/cert-manager-858654f9db-pnwww" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.550337 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.564259 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pnwww" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.574836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.873697 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pnwww"] Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.886059 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:13:26 crc kubenswrapper[4760]: W0123 18:13:26.972813 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7dda60_1788_4458_b1c4_fa4ecfd723a2.slice/crio-f044595d711819e0f4e71dfae465f550b58154fdfcb3ecc0f3674f689da96c60 WatchSource:0}: Error finding container f044595d711819e0f4e71dfae465f550b58154fdfcb3ecc0f3674f689da96c60: Status 404 returned error can't find the container with id f044595d711819e0f4e71dfae465f550b58154fdfcb3ecc0f3674f689da96c60 Jan 23 18:13:26 crc kubenswrapper[4760]: I0123 18:13:26.973784 4760 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t58c6"] Jan 23 18:13:27 crc kubenswrapper[4760]: I0123 18:13:27.035247 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-f4cdr"] Jan 23 18:13:27 crc kubenswrapper[4760]: I0123 18:13:27.171257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" event={"ID":"8eed5962-6318-49b8-82a5-7f10b629d81c","Type":"ContainerStarted","Data":"1b65c1b7cd306fda21e4cb237ee7cda5a14fa2611620214ef0959fd9633d49d3"} Jan 23 18:13:27 crc kubenswrapper[4760]: I0123 18:13:27.172426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pnwww" event={"ID":"9ef58368-3cff-49ff-8dfd-17ae3ff9e710","Type":"ContainerStarted","Data":"e6ec12f7bcc6ad7153fbf39559fcc4bedbdaf14dbd48466c6272a8b55f9e260b"} Jan 23 18:13:27 crc kubenswrapper[4760]: I0123 18:13:27.175142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" event={"ID":"0a7dda60-1788-4458-b1c4-fa4ecfd723a2","Type":"ContainerStarted","Data":"f044595d711819e0f4e71dfae465f550b58154fdfcb3ecc0f3674f689da96c60"} Jan 23 18:13:31 crc kubenswrapper[4760]: I0123 18:13:31.202005 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pnwww" event={"ID":"9ef58368-3cff-49ff-8dfd-17ae3ff9e710","Type":"ContainerStarted","Data":"320eac6f06fc178617d42752ac84340eba312d79e0654f4b3d8e25b7de35aa5c"} Jan 23 18:13:31 crc kubenswrapper[4760]: I0123 18:13:31.224837 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-pnwww" podStartSLOduration=1.873870435 podStartE2EDuration="5.224815917s" podCreationTimestamp="2026-01-23 18:13:26 +0000 UTC" firstStartedPulling="2026-01-23 18:13:26.885768235 +0000 UTC m=+749.888226168" lastFinishedPulling="2026-01-23 18:13:30.236713707 
+0000 UTC m=+753.239171650" observedRunningTime="2026-01-23 18:13:31.21957761 +0000 UTC m=+754.222035553" watchObservedRunningTime="2026-01-23 18:13:31.224815917 +0000 UTC m=+754.227273860" Jan 23 18:13:33 crc kubenswrapper[4760]: I0123 18:13:33.215373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" event={"ID":"0a7dda60-1788-4458-b1c4-fa4ecfd723a2","Type":"ContainerStarted","Data":"4c57a9b01d80fac154cbc00c5c1e4f5fdfa14593ecffcbfefa931e95630c56c8"} Jan 23 18:13:33 crc kubenswrapper[4760]: I0123 18:13:33.216983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" event={"ID":"8eed5962-6318-49b8-82a5-7f10b629d81c","Type":"ContainerStarted","Data":"9f19cbe1e546c6f437c3dd84fa9b63eb07672139bf2cd868ec53fadda1c3e979"} Jan 23 18:13:33 crc kubenswrapper[4760]: I0123 18:13:33.217108 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:33 crc kubenswrapper[4760]: I0123 18:13:33.231860 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t58c6" podStartSLOduration=3.901638332 podStartE2EDuration="7.231834022s" podCreationTimestamp="2026-01-23 18:13:26 +0000 UTC" firstStartedPulling="2026-01-23 18:13:26.975348766 +0000 UTC m=+749.977806699" lastFinishedPulling="2026-01-23 18:13:30.305544456 +0000 UTC m=+753.308002389" observedRunningTime="2026-01-23 18:13:33.229671891 +0000 UTC m=+756.232129824" watchObservedRunningTime="2026-01-23 18:13:33.231834022 +0000 UTC m=+756.234291975" Jan 23 18:13:33 crc kubenswrapper[4760]: I0123 18:13:33.248120 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" podStartSLOduration=2.104510832 podStartE2EDuration="7.248088458s" podCreationTimestamp="2026-01-23 18:13:26 +0000 UTC" 
firstStartedPulling="2026-01-23 18:13:27.040725 +0000 UTC m=+750.043182933" lastFinishedPulling="2026-01-23 18:13:32.184302626 +0000 UTC m=+755.186760559" observedRunningTime="2026-01-23 18:13:33.247214203 +0000 UTC m=+756.249672196" watchObservedRunningTime="2026-01-23 18:13:33.248088458 +0000 UTC m=+756.250546461" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.602093 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-58zkr"] Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.603877 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-controller" containerID="cri-o://31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.604012 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="northd" containerID="cri-o://a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.604070 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-node" containerID="cri-o://1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.604078 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="sbdb" containerID="cri-o://cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 
18:13:35.604126 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-acl-logging" containerID="cri-o://fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.603995 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.604001 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="nbdb" containerID="cri-o://d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.635638 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" containerID="cri-o://1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" gracePeriod=30 Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.871917 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/3.log" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.874578 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovn-acl-logging/0.log" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.875081 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovn-controller/0.log" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.875573 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.916822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.916881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.916900 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.916929 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528cp\" (UniqueName: \"kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917006 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917168 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917188 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917234 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917320 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917342 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917400 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917442 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917491 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917524 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib\") pod \"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert\") pod 
\"03a394da-f311-4268-9011-d781ba14cb3f\" (UID: \"03a394da-f311-4268-9011-d781ba14cb3f\") " Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917723 4760 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash" (OuterVolumeSpecName: "host-slash") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917912 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.917975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918025 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918063 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918089 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log" (OuterVolumeSpecName: "node-log") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918114 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918498 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918504 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918537 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket" (OuterVolumeSpecName: "log-socket") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918774 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.918925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.923346 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp" (OuterVolumeSpecName: "kube-api-access-528cp") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "kube-api-access-528cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.925998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.928341 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hlvrp"] Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.928677 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.928774 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.928828 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-acl-logging" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.928880 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-acl-logging" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.928930 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.928979 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929025 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929074 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929148 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929213 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929265 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="nbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929313 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="nbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929362 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kubecfg-setup" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929424 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kubecfg-setup" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929515 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-node" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929570 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-node" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929620 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="sbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929665 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="sbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929714 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" 
containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929758 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.929804 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="northd" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.929856 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="northd" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930024 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930083 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930132 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930194 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930244 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930291 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-node" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930344 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="nbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930401 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="sbdb" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930475 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovn-acl-logging" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930523 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="northd" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930567 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.930701 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930759 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: E0123 18:13:35.930815 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.930864 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.931012 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a394da-f311-4268-9011-d781ba14cb3f" containerName="ovnkube-controller" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.933005 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:35 crc kubenswrapper[4760]: I0123 18:13:35.937299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "03a394da-f311-4268-9011-d781ba14cb3f" (UID: "03a394da-f311-4268-9011-d781ba14cb3f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018661 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-script-lib\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018732 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-bin\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018764 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-kubelet\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-netns\") pod 
\"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018818 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-log-socket\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-config\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.018936 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-var-lib-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019023 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-systemd-units\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cft5m\" (UniqueName: \"kubernetes.io/projected/09c0cc0a-dfab-4935-9f99-3da2334ee068-kube-api-access-cft5m\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-etc-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019206 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-netd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-systemd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019332 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-slash\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019376 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-node-log\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-env-overrides\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019571 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovn-node-metrics-cert\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-ovn\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019787 4760 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019814 4760 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019829 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-528cp\" (UniqueName: \"kubernetes.io/projected/03a394da-f311-4268-9011-d781ba14cb3f-kube-api-access-528cp\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019845 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019858 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019868 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019879 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019890 4760 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019901 4760 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019912 4760 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019923 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019933 4760 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc 
kubenswrapper[4760]: I0123 18:13:36.019944 4760 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019954 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019967 4760 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019977 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.019989 4760 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03a394da-f311-4268-9011-d781ba14cb3f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.020001 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03a394da-f311-4268-9011-d781ba14cb3f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.020012 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03a394da-f311-4268-9011-d781ba14cb3f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.120923 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cft5m\" (UniqueName: \"kubernetes.io/projected/09c0cc0a-dfab-4935-9f99-3da2334ee068-kube-api-access-cft5m\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-etc-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121303 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121327 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-netd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-netd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-etc-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-systemd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-systemd\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-slash\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-node-log\") pod \"ovnkube-node-hlvrp\" (UID: 
\"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121746 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-env-overrides\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovn-node-metrics-cert\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-ovn\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121889 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-script-lib\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-node-log\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" 
Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-bin\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-cni-bin\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.121918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-slash\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122024 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-ovn\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-kubelet\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-netns\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122489 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-kubelet\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-run-netns\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122734 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-run-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-env-overrides\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-log-socket\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-log-socket\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.122966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-config\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-var-lib-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-var-lib-openvswitch\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-systemd-units\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/09c0cc0a-dfab-4935-9f99-3da2334ee068-systemd-units\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-script-lib\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.123673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovnkube-config\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.126045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/09c0cc0a-dfab-4935-9f99-3da2334ee068-ovn-node-metrics-cert\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.137501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cft5m\" (UniqueName: \"kubernetes.io/projected/09c0cc0a-dfab-4935-9f99-3da2334ee068-kube-api-access-cft5m\") pod \"ovnkube-node-hlvrp\" (UID: \"09c0cc0a-dfab-4935-9f99-3da2334ee068\") " pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.235448 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovnkube-controller/3.log" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.238137 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovn-acl-logging/0.log" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.238842 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-58zkr_03a394da-f311-4268-9011-d781ba14cb3f/ovn-controller/0.log" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239359 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239393 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239449 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239504 4760 scope.go:117] "RemoveContainer" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239507 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239519 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239528 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" exitCode=0 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239537 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" exitCode=143 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239542 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239547 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a394da-f311-4268-9011-d781ba14cb3f" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" exitCode=143 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239490 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" 
event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239795 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239808 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239815 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239822 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239829 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239836 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:13:36 crc 
kubenswrapper[4760]: I0123 18:13:36.239843 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239851 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239858 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239882 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239890 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239897 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239904 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239911 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239918 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239925 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239933 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239940 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239946 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239956 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239969 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239977 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239984 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239990 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.239997 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240003 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240009 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240016 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240023 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240029 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240038 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-58zkr" event={"ID":"03a394da-f311-4268-9011-d781ba14cb3f","Type":"ContainerDied","Data":"c875a73d78524a85d0e23411dd91c84150f9863f072e7c72f3588a961aeef846"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240049 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240056 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240064 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240071 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240077 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} Jan 23 
18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240084 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240090 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240097 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240103 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.240110 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.244262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/2.log" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.244779 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/1.log" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.244830 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac96490a-85b1-48f4-99d1-2b7505744007" containerID="e1df9bdba069426e58b0479a2525b9c4dca95a968b7b411df11c56edc4c931cf" exitCode=2 Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.244863 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerDied","Data":"e1df9bdba069426e58b0479a2525b9c4dca95a968b7b411df11c56edc4c931cf"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.244888 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b"} Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.245319 4760 scope.go:117] "RemoveContainer" containerID="e1df9bdba069426e58b0479a2525b9c4dca95a968b7b411df11c56edc4c931cf" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.262620 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.275369 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.293294 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-58zkr"] Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.293346 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-58zkr"] Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.306857 4760 scope.go:117] "RemoveContainer" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 18:13:36 crc kubenswrapper[4760]: W0123 18:13:36.330548 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c0cc0a_dfab_4935_9f99_3da2334ee068.slice/crio-48f72d3064112b2261df41445fb941651ef1af0a5c528087468787d5fc9a69ef WatchSource:0}: Error finding container 48f72d3064112b2261df41445fb941651ef1af0a5c528087468787d5fc9a69ef: Status 404 returned error can't find 
the container with id 48f72d3064112b2261df41445fb941651ef1af0a5c528087468787d5fc9a69ef Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.339573 4760 scope.go:117] "RemoveContainer" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.368006 4760 scope.go:117] "RemoveContainer" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.386992 4760 scope.go:117] "RemoveContainer" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.412811 4760 scope.go:117] "RemoveContainer" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.441971 4760 scope.go:117] "RemoveContainer" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.456780 4760 scope.go:117] "RemoveContainer" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.478499 4760 scope.go:117] "RemoveContainer" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.545214 4760 scope.go:117] "RemoveContainer" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.546196 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not exist" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 
crc kubenswrapper[4760]: I0123 18:13:36.546238 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} err="failed to get container status \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": rpc error: code = NotFound desc = could not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.546265 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.547041 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": container with ID starting with cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f not found: ID does not exist" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.547080 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} err="failed to get container status \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": rpc error: code = NotFound desc = could not find container \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": container with ID starting with cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.547109 4760 scope.go:117] "RemoveContainer" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 
18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.547495 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": container with ID starting with cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5 not found: ID does not exist" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.547573 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} err="failed to get container status \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": rpc error: code = NotFound desc = could not find container \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": container with ID starting with cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.547622 4760 scope.go:117] "RemoveContainer" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.548142 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": container with ID starting with d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb not found: ID does not exist" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548176 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} err="failed to get container status 
\"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": rpc error: code = NotFound desc = could not find container \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": container with ID starting with d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548195 4760 scope.go:117] "RemoveContainer" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.548477 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": container with ID starting with a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2 not found: ID does not exist" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548506 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} err="failed to get container status \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": rpc error: code = NotFound desc = could not find container \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": container with ID starting with a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548524 4760 scope.go:117] "RemoveContainer" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.548867 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": container with ID starting with af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f not found: ID does not exist" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548892 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} err="failed to get container status \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": rpc error: code = NotFound desc = could not find container \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": container with ID starting with af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.548908 4760 scope.go:117] "RemoveContainer" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.549192 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": container with ID starting with 1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30 not found: ID does not exist" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549225 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} err="failed to get container status \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": rpc error: code = NotFound desc = could not find container \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": container with ID 
starting with 1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549244 4760 scope.go:117] "RemoveContainer" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.549650 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": container with ID starting with fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e not found: ID does not exist" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549679 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} err="failed to get container status \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": rpc error: code = NotFound desc = could not find container \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": container with ID starting with fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549696 4760 scope.go:117] "RemoveContainer" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.549928 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": container with ID starting with 31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de not found: ID does not exist" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 
18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549957 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} err="failed to get container status \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": rpc error: code = NotFound desc = could not find container \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": container with ID starting with 31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.549976 4760 scope.go:117] "RemoveContainer" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: E0123 18:13:36.550238 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": container with ID starting with 6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1 not found: ID does not exist" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550267 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} err="failed to get container status \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": rpc error: code = NotFound desc = could not find container \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": container with ID starting with 6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550283 4760 scope.go:117] "RemoveContainer" 
containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550547 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} err="failed to get container status \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": rpc error: code = NotFound desc = could not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550573 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550844 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} err="failed to get container status \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": rpc error: code = NotFound desc = could not find container \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": container with ID starting with cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.550869 4760 scope.go:117] "RemoveContainer" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551126 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} err="failed to get container status \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": rpc error: code = NotFound desc = could 
not find container \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": container with ID starting with cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551151 4760 scope.go:117] "RemoveContainer" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551443 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} err="failed to get container status \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": rpc error: code = NotFound desc = could not find container \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": container with ID starting with d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551471 4760 scope.go:117] "RemoveContainer" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551707 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} err="failed to get container status \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": rpc error: code = NotFound desc = could not find container \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": container with ID starting with a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551732 4760 scope.go:117] "RemoveContainer" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 
18:13:36.551952 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} err="failed to get container status \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": rpc error: code = NotFound desc = could not find container \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": container with ID starting with af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.551981 4760 scope.go:117] "RemoveContainer" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552267 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} err="failed to get container status \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": rpc error: code = NotFound desc = could not find container \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": container with ID starting with 1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552295 4760 scope.go:117] "RemoveContainer" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552600 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} err="failed to get container status \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": rpc error: code = NotFound desc = could not find container \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": container with ID starting with 
fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552628 4760 scope.go:117] "RemoveContainer" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552851 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} err="failed to get container status \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": rpc error: code = NotFound desc = could not find container \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": container with ID starting with 31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.552879 4760 scope.go:117] "RemoveContainer" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553119 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} err="failed to get container status \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": rpc error: code = NotFound desc = could not find container \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": container with ID starting with 6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553147 4760 scope.go:117] "RemoveContainer" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553385 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} err="failed to get container status \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": rpc error: code = NotFound desc = could not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553454 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553718 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} err="failed to get container status \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": rpc error: code = NotFound desc = could not find container \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": container with ID starting with cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553743 4760 scope.go:117] "RemoveContainer" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.553980 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} err="failed to get container status \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": rpc error: code = NotFound desc = could not find container \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": container with ID starting with cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5 not found: ID does not 
exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554005 4760 scope.go:117] "RemoveContainer" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554268 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} err="failed to get container status \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": rpc error: code = NotFound desc = could not find container \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": container with ID starting with d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554305 4760 scope.go:117] "RemoveContainer" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554535 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} err="failed to get container status \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": rpc error: code = NotFound desc = could not find container \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": container with ID starting with a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554561 4760 scope.go:117] "RemoveContainer" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554797 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} err="failed to get container status 
\"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": rpc error: code = NotFound desc = could not find container \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": container with ID starting with af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.554848 4760 scope.go:117] "RemoveContainer" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555161 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} err="failed to get container status \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": rpc error: code = NotFound desc = could not find container \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": container with ID starting with 1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555201 4760 scope.go:117] "RemoveContainer" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555636 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} err="failed to get container status \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": rpc error: code = NotFound desc = could not find container \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": container with ID starting with fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555677 4760 scope.go:117] "RemoveContainer" 
containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555956 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} err="failed to get container status \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": rpc error: code = NotFound desc = could not find container \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": container with ID starting with 31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.555980 4760 scope.go:117] "RemoveContainer" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556260 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} err="failed to get container status \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": rpc error: code = NotFound desc = could not find container \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": container with ID starting with 6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556290 4760 scope.go:117] "RemoveContainer" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556532 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} err="failed to get container status \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": rpc error: code = NotFound desc = could 
not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556557 4760 scope.go:117] "RemoveContainer" containerID="cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556825 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f"} err="failed to get container status \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": rpc error: code = NotFound desc = could not find container \"cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f\": container with ID starting with cdbdb8ae54cb8c4fa8e2c148493530b37b65cb13f8ffd62d71f53ab880b8977f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.556855 4760 scope.go:117] "RemoveContainer" containerID="cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.557115 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5"} err="failed to get container status \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": rpc error: code = NotFound desc = could not find container \"cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5\": container with ID starting with cce5822823a7f26eaf32b91303fcfe6962be247759b4847c35a587d21d9e93b5 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.557143 4760 scope.go:117] "RemoveContainer" containerID="d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 
18:13:36.557391 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb"} err="failed to get container status \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": rpc error: code = NotFound desc = could not find container \"d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb\": container with ID starting with d778882d96891caa8893627e6a5bca005028a598e59999e6de80343bd44760bb not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.557698 4760 scope.go:117] "RemoveContainer" containerID="a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558024 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2"} err="failed to get container status \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": rpc error: code = NotFound desc = could not find container \"a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2\": container with ID starting with a6f51828dfeb67c38bd8b743738ee85a9c8727f0f9bcc8a66036edb3e522b5b2 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558048 4760 scope.go:117] "RemoveContainer" containerID="af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558351 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f"} err="failed to get container status \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": rpc error: code = NotFound desc = could not find container \"af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f\": container with ID starting with 
af819d8d42dbcee3eaec3c04d3b09f6b3ce63711904786d8d40e383ba8ed848f not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558380 4760 scope.go:117] "RemoveContainer" containerID="1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558626 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30"} err="failed to get container status \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": rpc error: code = NotFound desc = could not find container \"1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30\": container with ID starting with 1b3cf2d3a9c3c1cadd02b970de6e70858714f5a0154e6c6d226d16c634042b30 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558652 4760 scope.go:117] "RemoveContainer" containerID="fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558903 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e"} err="failed to get container status \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": rpc error: code = NotFound desc = could not find container \"fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e\": container with ID starting with fdac18f413ce48a15114f6bf2cc9dd609932b95d6e4ab7d8caaeb13d6fe3db6e not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.558928 4760 scope.go:117] "RemoveContainer" containerID="31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.559185 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de"} err="failed to get container status \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": rpc error: code = NotFound desc = could not find container \"31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de\": container with ID starting with 31f4ac1cab2cbee5994ff1a22997a2eddca8751928c84f9d7b180a10b61315de not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.559213 4760 scope.go:117] "RemoveContainer" containerID="6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.559514 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1"} err="failed to get container status \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": rpc error: code = NotFound desc = could not find container \"6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1\": container with ID starting with 6915e11bd400a0172d34e0daeaed0ab4a863d81ab00b89e186ad081d42879ee1 not found: ID does not exist" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.559544 4760 scope.go:117] "RemoveContainer" containerID="1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8" Jan 23 18:13:36 crc kubenswrapper[4760]: I0123 18:13:36.559788 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8"} err="failed to get container status \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": rpc error: code = NotFound desc = could not find container \"1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8\": container with ID starting with 1efe03bfcc1a0761c193becc3452940093a12c0141b0bd951ec73c315461dbb8 not found: ID does not 
exist" Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.252348 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/2.log" Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.253125 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/1.log" Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.253202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7ck54" event={"ID":"ac96490a-85b1-48f4-99d1-2b7505744007","Type":"ContainerStarted","Data":"ecb6d59853eb3a47bccf5a53a370a1768ce8dcd9f2aad42c219b2a2f2123031c"} Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.255622 4760 generic.go:334] "Generic (PLEG): container finished" podID="09c0cc0a-dfab-4935-9f99-3da2334ee068" containerID="15e1e2e999dc773250d55b1d8ad1f0e6fddb4a2fd8750d8885fec1667b8a3d0f" exitCode=0 Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.255682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerDied","Data":"15e1e2e999dc773250d55b1d8ad1f0e6fddb4a2fd8750d8885fec1667b8a3d0f"} Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.255760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"48f72d3064112b2261df41445fb941651ef1af0a5c528087468787d5fc9a69ef"} Jan 23 18:13:37 crc kubenswrapper[4760]: I0123 18:13:37.606304 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a394da-f311-4268-9011-d781ba14cb3f" path="/var/lib/kubelet/pods/03a394da-f311-4268-9011-d781ba14cb3f/volumes" Jan 23 18:13:38 crc kubenswrapper[4760]: I0123 18:13:38.263032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"b0485c952bdc34076ccb79432f86a1b18ea4c14650689684781b57c74cafb72d"} Jan 23 18:13:38 crc kubenswrapper[4760]: I0123 18:13:38.263922 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"2c920d2c902e175e92423d481ec0987ddef7ee7efcc755ecc2bc490e0561237e"} Jan 23 18:13:38 crc kubenswrapper[4760]: I0123 18:13:38.264022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"53f81b422e1923eb032aa5b138163dae9adc3d5e82a208eb3394ca5090dcdfb9"} Jan 23 18:13:39 crc kubenswrapper[4760]: I0123 18:13:39.271465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"a7de0dacecd659087a3e8744975a73eb74b53becb6aeae17735bbad4a03912af"} Jan 23 18:13:39 crc kubenswrapper[4760]: I0123 18:13:39.272528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"71494c5fd51cf9de211abcc78c03df26cd953b46f46fd1f467aaf85b8a96b8d8"} Jan 23 18:13:39 crc kubenswrapper[4760]: I0123 18:13:39.272552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"7e5ca5fec9f374dd5984ab553b77dc1e2e08c88a154b8338efc405c59b29c8b9"} Jan 23 18:13:41 crc kubenswrapper[4760]: I0123 18:13:41.270450 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 
23 18:13:41 crc kubenswrapper[4760]: I0123 18:13:41.285687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"6257584de5a80a92f6371167e3b19d35ce4fb4a9c905c4d24b68f173425e88ca"} Jan 23 18:13:41 crc kubenswrapper[4760]: I0123 18:13:41.577710 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-f4cdr" Jan 23 18:13:44 crc kubenswrapper[4760]: I0123 18:13:44.310647 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" event={"ID":"09c0cc0a-dfab-4935-9f99-3da2334ee068","Type":"ContainerStarted","Data":"e231d02168a89c8b5b65ad6956604ab420bc8d031790dc68f7b593e3b2163325"} Jan 23 18:13:45 crc kubenswrapper[4760]: I0123 18:13:45.316702 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:45 crc kubenswrapper[4760]: I0123 18:13:45.316746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:45 crc kubenswrapper[4760]: I0123 18:13:45.354884 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" podStartSLOduration=10.354856533 podStartE2EDuration="10.354856533s" podCreationTimestamp="2026-01-23 18:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:13:45.348055092 +0000 UTC m=+768.350513105" watchObservedRunningTime="2026-01-23 18:13:45.354856533 +0000 UTC m=+768.357314486" Jan 23 18:13:45 crc kubenswrapper[4760]: I0123 18:13:45.363146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:46 crc kubenswrapper[4760]: 
I0123 18:13:46.076032 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:13:46 crc kubenswrapper[4760]: I0123 18:13:46.076111 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:13:46 crc kubenswrapper[4760]: I0123 18:13:46.263741 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:46 crc kubenswrapper[4760]: I0123 18:13:46.287580 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:13:57 crc kubenswrapper[4760]: I0123 18:13:57.813384 4760 scope.go:117] "RemoveContainer" containerID="02935f40f10269de8f668fe9afe8de2cb096a54039beb6763a876d286e0d819b" Jan 23 18:13:58 crc kubenswrapper[4760]: I0123 18:13:58.394822 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7ck54_ac96490a-85b1-48f4-99d1-2b7505744007/kube-multus/2.log" Jan 23 18:14:06 crc kubenswrapper[4760]: I0123 18:14:06.295217 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hlvrp" Jan 23 18:14:16 crc kubenswrapper[4760]: I0123 18:14:16.075661 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 18:14:16 crc kubenswrapper[4760]: I0123 18:14:16.076960 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.203896 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7"] Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.205373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.208756 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.217703 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7"] Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.276913 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.276960 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.276987 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxt8g\" (UniqueName: \"kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.378258 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.378326 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.378357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxt8g\" (UniqueName: \"kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: 
\"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.379011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.379183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.401066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxt8g\" (UniqueName: \"kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.522317 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:28 crc kubenswrapper[4760]: I0123 18:14:28.935781 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7"] Jan 23 18:14:29 crc kubenswrapper[4760]: I0123 18:14:29.581573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerStarted","Data":"5589e66d2d786729e916d0472bda998615550173ba24b4658886d2f516315e74"} Jan 23 18:14:29 crc kubenswrapper[4760]: I0123 18:14:29.581629 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerStarted","Data":"ec6eac82efb00dfa2f89f89932c86f459f3cb25ac0053000f6189da113009646"} Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.496355 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.498827 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.503797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.503863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5lj7\" (UniqueName: \"kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.503902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.508518 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.588032 4760 generic.go:334] "Generic (PLEG): container finished" podID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerID="5589e66d2d786729e916d0472bda998615550173ba24b4658886d2f516315e74" exitCode=0 Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.588087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" 
event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerDied","Data":"5589e66d2d786729e916d0472bda998615550173ba24b4658886d2f516315e74"} Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.604870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.604953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5lj7\" (UniqueName: \"kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.605010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.605603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.605659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content\") pod \"redhat-operators-g97lz\" (UID: 
\"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.630007 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5lj7\" (UniqueName: \"kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7\") pod \"redhat-operators-g97lz\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:30 crc kubenswrapper[4760]: I0123 18:14:30.822791 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:31 crc kubenswrapper[4760]: I0123 18:14:31.026102 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:31 crc kubenswrapper[4760]: W0123 18:14:31.030017 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d75799_fd2e_446c_913d_552115ae872c.slice/crio-47f3c520993707fe96749422ebf3f58a9d011fbf0f3779e39c5e98ace4d2806a WatchSource:0}: Error finding container 47f3c520993707fe96749422ebf3f58a9d011fbf0f3779e39c5e98ace4d2806a: Status 404 returned error can't find the container with id 47f3c520993707fe96749422ebf3f58a9d011fbf0f3779e39c5e98ace4d2806a Jan 23 18:14:31 crc kubenswrapper[4760]: I0123 18:14:31.594802 4760 generic.go:334] "Generic (PLEG): container finished" podID="b9d75799-fd2e-446c-913d-552115ae872c" containerID="9d112ab81d6a6822553de1743d1e3ca5ee0692e3ba230788fb83d648e50d17c7" exitCode=0 Jan 23 18:14:31 crc kubenswrapper[4760]: I0123 18:14:31.601893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerDied","Data":"9d112ab81d6a6822553de1743d1e3ca5ee0692e3ba230788fb83d648e50d17c7"} Jan 23 18:14:31 crc 
kubenswrapper[4760]: I0123 18:14:31.601945 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerStarted","Data":"47f3c520993707fe96749422ebf3f58a9d011fbf0f3779e39c5e98ace4d2806a"} Jan 23 18:14:32 crc kubenswrapper[4760]: I0123 18:14:32.604450 4760 generic.go:334] "Generic (PLEG): container finished" podID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerID="e6b40500884039680e586c5ce2aad842817e3c36fd72fd382a1d2f26c2472b78" exitCode=0 Jan 23 18:14:32 crc kubenswrapper[4760]: I0123 18:14:32.604524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerDied","Data":"e6b40500884039680e586c5ce2aad842817e3c36fd72fd382a1d2f26c2472b78"} Jan 23 18:14:33 crc kubenswrapper[4760]: I0123 18:14:33.613714 4760 generic.go:334] "Generic (PLEG): container finished" podID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerID="49dee7ae2ccdbd66f5ac3550f79661c56f1104c615bff50eccc5bad6eac54f38" exitCode=0 Jan 23 18:14:33 crc kubenswrapper[4760]: I0123 18:14:33.613792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerDied","Data":"49dee7ae2ccdbd66f5ac3550f79661c56f1104c615bff50eccc5bad6eac54f38"} Jan 23 18:14:33 crc kubenswrapper[4760]: I0123 18:14:33.616092 4760 generic.go:334] "Generic (PLEG): container finished" podID="b9d75799-fd2e-446c-913d-552115ae872c" containerID="b08757a1a7a21d34d9c6941f825bff9d6aaed9aeff2769542a984243879f4288" exitCode=0 Jan 23 18:14:33 crc kubenswrapper[4760]: I0123 18:14:33.616291 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" 
event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerDied","Data":"b08757a1a7a21d34d9c6941f825bff9d6aaed9aeff2769542a984243879f4288"} Jan 23 18:14:34 crc kubenswrapper[4760]: I0123 18:14:34.629065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerStarted","Data":"842f1f3c3a04659e4625f37b2f984408ecc5f65656cd4cbfdc68f834e40f6dfe"} Jan 23 18:14:34 crc kubenswrapper[4760]: I0123 18:14:34.649665 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g97lz" podStartSLOduration=2.171409502 podStartE2EDuration="4.649638628s" podCreationTimestamp="2026-01-23 18:14:30 +0000 UTC" firstStartedPulling="2026-01-23 18:14:31.596499425 +0000 UTC m=+814.598957358" lastFinishedPulling="2026-01-23 18:14:34.074728551 +0000 UTC m=+817.077186484" observedRunningTime="2026-01-23 18:14:34.647563849 +0000 UTC m=+817.650021782" watchObservedRunningTime="2026-01-23 18:14:34.649638628 +0000 UTC m=+817.652096601" Jan 23 18:14:34 crc kubenswrapper[4760]: I0123 18:14:34.876267 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.059817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle\") pod \"95b32adf-ef73-48db-9961-c29c4a278ff5\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.059931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util\") pod \"95b32adf-ef73-48db-9961-c29c4a278ff5\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.059989 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxt8g\" (UniqueName: \"kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g\") pod \"95b32adf-ef73-48db-9961-c29c4a278ff5\" (UID: \"95b32adf-ef73-48db-9961-c29c4a278ff5\") " Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.060878 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle" (OuterVolumeSpecName: "bundle") pod "95b32adf-ef73-48db-9961-c29c4a278ff5" (UID: "95b32adf-ef73-48db-9961-c29c4a278ff5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.066611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g" (OuterVolumeSpecName: "kube-api-access-bxt8g") pod "95b32adf-ef73-48db-9961-c29c4a278ff5" (UID: "95b32adf-ef73-48db-9961-c29c4a278ff5"). InnerVolumeSpecName "kube-api-access-bxt8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.075141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util" (OuterVolumeSpecName: "util") pod "95b32adf-ef73-48db-9961-c29c4a278ff5" (UID: "95b32adf-ef73-48db-9961-c29c4a278ff5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.161337 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.161391 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxt8g\" (UniqueName: \"kubernetes.io/projected/95b32adf-ef73-48db-9961-c29c4a278ff5-kube-api-access-bxt8g\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.161402 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95b32adf-ef73-48db-9961-c29c4a278ff5-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.636452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" event={"ID":"95b32adf-ef73-48db-9961-c29c4a278ff5","Type":"ContainerDied","Data":"ec6eac82efb00dfa2f89f89932c86f459f3cb25ac0053000f6189da113009646"} Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.636496 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6eac82efb00dfa2f89f89932c86f459f3cb25ac0053000f6189da113009646" Jan 23 18:14:35 crc kubenswrapper[4760]: I0123 18:14:35.636507 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.419758 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnl5v"] Jan 23 18:14:38 crc kubenswrapper[4760]: E0123 18:14:38.420264 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="util" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.420276 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="util" Jan 23 18:14:38 crc kubenswrapper[4760]: E0123 18:14:38.420293 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="pull" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.420299 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="pull" Jan 23 18:14:38 crc kubenswrapper[4760]: E0123 18:14:38.420312 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="extract" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.420317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="extract" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.420453 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b32adf-ef73-48db-9961-c29c4a278ff5" containerName="extract" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.420829 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.422830 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.423225 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.423254 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c98sj" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.425093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jphf\" (UniqueName: \"kubernetes.io/projected/7cffdf90-9546-4761-bb9d-c4c6da9dffa7-kube-api-access-5jphf\") pod \"nmstate-operator-646758c888-cnl5v\" (UID: \"7cffdf90-9546-4761-bb9d-c4c6da9dffa7\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.435749 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnl5v"] Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.526285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jphf\" (UniqueName: \"kubernetes.io/projected/7cffdf90-9546-4761-bb9d-c4c6da9dffa7-kube-api-access-5jphf\") pod \"nmstate-operator-646758c888-cnl5v\" (UID: \"7cffdf90-9546-4761-bb9d-c4c6da9dffa7\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.544278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jphf\" (UniqueName: \"kubernetes.io/projected/7cffdf90-9546-4761-bb9d-c4c6da9dffa7-kube-api-access-5jphf\") pod \"nmstate-operator-646758c888-cnl5v\" (UID: 
\"7cffdf90-9546-4761-bb9d-c4c6da9dffa7\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.735091 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" Jan 23 18:14:38 crc kubenswrapper[4760]: I0123 18:14:38.925039 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnl5v"] Jan 23 18:14:39 crc kubenswrapper[4760]: I0123 18:14:39.663286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" event={"ID":"7cffdf90-9546-4761-bb9d-c4c6da9dffa7","Type":"ContainerStarted","Data":"f34370fe2a1bcd1f7a977952f0958f8b8c416d5f6d5923488da5328c59083842"} Jan 23 18:14:40 crc kubenswrapper[4760]: I0123 18:14:40.823968 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:40 crc kubenswrapper[4760]: I0123 18:14:40.824040 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:40 crc kubenswrapper[4760]: I0123 18:14:40.862998 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:41 crc kubenswrapper[4760]: I0123 18:14:41.723312 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:42 crc kubenswrapper[4760]: I0123 18:14:42.684986 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" event={"ID":"7cffdf90-9546-4761-bb9d-c4c6da9dffa7","Type":"ContainerStarted","Data":"c42efaeb702fb1df76b64789a8e8b791fa79690bcffede35d04a55feb0227af0"} Jan 23 18:14:42 crc kubenswrapper[4760]: I0123 18:14:42.704053 4760 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cnl5v" podStartSLOduration=2.008315844 podStartE2EDuration="4.704030696s" podCreationTimestamp="2026-01-23 18:14:38 +0000 UTC" firstStartedPulling="2026-01-23 18:14:38.941061264 +0000 UTC m=+821.943519197" lastFinishedPulling="2026-01-23 18:14:41.636776116 +0000 UTC m=+824.639234049" observedRunningTime="2026-01-23 18:14:42.701840394 +0000 UTC m=+825.704298337" watchObservedRunningTime="2026-01-23 18:14:42.704030696 +0000 UTC m=+825.706488629" Jan 23 18:14:43 crc kubenswrapper[4760]: I0123 18:14:43.285118 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:43 crc kubenswrapper[4760]: I0123 18:14:43.702872 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g97lz" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="registry-server" containerID="cri-o://842f1f3c3a04659e4625f37b2f984408ecc5f65656cd4cbfdc68f834e40f6dfe" gracePeriod=2 Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.075378 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.075889 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.075985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.076920 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.077021 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060" gracePeriod=600 Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.734391 4760 generic.go:334] "Generic (PLEG): container finished" podID="b9d75799-fd2e-446c-913d-552115ae872c" containerID="842f1f3c3a04659e4625f37b2f984408ecc5f65656cd4cbfdc68f834e40f6dfe" exitCode=0 Jan 23 18:14:46 crc kubenswrapper[4760]: I0123 18:14:46.734457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerDied","Data":"842f1f3c3a04659e4625f37b2f984408ecc5f65656cd4cbfdc68f834e40f6dfe"} Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.741910 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060" exitCode=0 Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.741968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060"} Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.742519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c"} Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.742542 4760 scope.go:117] "RemoveContainer" containerID="5c263e7f736d504cb47d3c8ca1d08a88c9a005d4eb5b95dcb5630ea2222a2389" Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.809726 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.954108 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5lj7\" (UniqueName: \"kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7\") pod \"b9d75799-fd2e-446c-913d-552115ae872c\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.954226 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content\") pod \"b9d75799-fd2e-446c-913d-552115ae872c\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.954273 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities\") pod \"b9d75799-fd2e-446c-913d-552115ae872c\" (UID: \"b9d75799-fd2e-446c-913d-552115ae872c\") " Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.964833 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7" (OuterVolumeSpecName: "kube-api-access-l5lj7") pod "b9d75799-fd2e-446c-913d-552115ae872c" (UID: "b9d75799-fd2e-446c-913d-552115ae872c"). InnerVolumeSpecName "kube-api-access-l5lj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:14:47 crc kubenswrapper[4760]: I0123 18:14:47.972991 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities" (OuterVolumeSpecName: "utilities") pod "b9d75799-fd2e-446c-913d-552115ae872c" (UID: "b9d75799-fd2e-446c-913d-552115ae872c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.060863 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.060901 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5lj7\" (UniqueName: \"kubernetes.io/projected/b9d75799-fd2e-446c-913d-552115ae872c-kube-api-access-l5lj7\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.084334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9d75799-fd2e-446c-913d-552115ae872c" (UID: "b9d75799-fd2e-446c-913d-552115ae872c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.162321 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d75799-fd2e-446c-913d-552115ae872c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.529912 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj872"] Jan 23 18:14:48 crc kubenswrapper[4760]: E0123 18:14:48.530493 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="registry-server" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.530511 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="registry-server" Jan 23 18:14:48 crc kubenswrapper[4760]: E0123 18:14:48.530523 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="extract-content" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.530530 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="extract-content" Jan 23 18:14:48 crc kubenswrapper[4760]: E0123 18:14:48.530547 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="extract-utilities" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.530568 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="extract-utilities" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.530672 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d75799-fd2e-446c-913d-552115ae872c" containerName="registry-server" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.531350 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.537743 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hznrz" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.542105 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj872"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.551183 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.552028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.555467 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.567134 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-j7k9r"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.567968 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.572564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.666819 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddb8\" (UniqueName: \"kubernetes.io/projected/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-kube-api-access-6ddb8\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.666865 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.666959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6w7\" (UniqueName: \"kubernetes.io/projected/02fddd4e-8b61-4c07-b08d-f8ab8a2799ba-kube-api-access-nj6w7\") pod \"nmstate-metrics-54757c584b-rj872\" (UID: \"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.667520 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.668318 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.671101 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.671462 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rs6rq" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.673773 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.675636 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.752060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g97lz" event={"ID":"b9d75799-fd2e-446c-913d-552115ae872c","Type":"ContainerDied","Data":"47f3c520993707fe96749422ebf3f58a9d011fbf0f3779e39c5e98ace4d2806a"} Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.752096 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g97lz" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.752121 4760 scope.go:117] "RemoveContainer" containerID="842f1f3c3a04659e4625f37b2f984408ecc5f65656cd4cbfdc68f834e40f6dfe" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj44w\" (UniqueName: \"kubernetes.io/projected/0627471a-680e-425a-a2de-e4e8d1b4e956-kube-api-access-hj44w\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6w7\" (UniqueName: \"kubernetes.io/projected/02fddd4e-8b61-4c07-b08d-f8ab8a2799ba-kube-api-access-nj6w7\") pod \"nmstate-metrics-54757c584b-rj872\" (UID: \"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-nmstate-lock\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768181 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-dbus-socket\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768216 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6ddb8\" (UniqueName: \"kubernetes.io/projected/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-kube-api-access-6ddb8\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.768270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-ovs-socket\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: E0123 18:14:48.768706 4760 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 18:14:48 crc kubenswrapper[4760]: E0123 18:14:48.768780 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair podName:5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea nodeName:}" failed. No retries permitted until 2026-01-23 18:14:49.268760156 +0000 UTC m=+832.271218089 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rbsvp" (UID: "5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea") : secret "openshift-nmstate-webhook" not found Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.776258 4760 scope.go:117] "RemoveContainer" containerID="b08757a1a7a21d34d9c6941f825bff9d6aaed9aeff2769542a984243879f4288" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.793159 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.803645 4760 scope.go:117] "RemoveContainer" containerID="9d112ab81d6a6822553de1743d1e3ca5ee0692e3ba230788fb83d648e50d17c7" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.815377 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g97lz"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.823817 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ddb8\" (UniqueName: \"kubernetes.io/projected/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-kube-api-access-6ddb8\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.824809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6w7\" (UniqueName: \"kubernetes.io/projected/02fddd4e-8b61-4c07-b08d-f8ab8a2799ba-kube-api-access-nj6w7\") pod \"nmstate-metrics-54757c584b-rj872\" (UID: \"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.845486 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/45b6025c-fe75-4723-8f72-7ef9d4414827-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-nmstate-lock\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869353 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-nmstate-lock\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869394 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-dbus-socket\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-ovs-socket\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " 
pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45b6025c-fe75-4723-8f72-7ef9d4414827-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869661 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj44w\" (UniqueName: \"kubernetes.io/projected/0627471a-680e-425a-a2de-e4e8d1b4e956-kube-api-access-hj44w\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.869716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx9gf\" (UniqueName: \"kubernetes.io/projected/45b6025c-fe75-4723-8f72-7ef9d4414827-kube-api-access-rx9gf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.870177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-dbus-socket\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.870217 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0627471a-680e-425a-a2de-e4e8d1b4e956-ovs-socket\") pod \"nmstate-handler-j7k9r\" (UID: 
\"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.899101 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj44w\" (UniqueName: \"kubernetes.io/projected/0627471a-680e-425a-a2de-e4e8d1b4e956-kube-api-access-hj44w\") pod \"nmstate-handler-j7k9r\" (UID: \"0627471a-680e-425a-a2de-e4e8d1b4e956\") " pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.901237 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d7f6cdcd7-qrn4c"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.901876 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.919904 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d7f6cdcd7-qrn4c"] Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.970523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9gf\" (UniqueName: \"kubernetes.io/projected/45b6025c-fe75-4723-8f72-7ef9d4414827-kube-api-access-rx9gf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.970593 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/45b6025c-fe75-4723-8f72-7ef9d4414827-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.970685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45b6025c-fe75-4723-8f72-7ef9d4414827-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.971646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45b6025c-fe75-4723-8f72-7ef9d4414827-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.975237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/45b6025c-fe75-4723-8f72-7ef9d4414827-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:48 crc kubenswrapper[4760]: I0123 18:14:48.997906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx9gf\" (UniqueName: \"kubernetes.io/projected/45b6025c-fe75-4723-8f72-7ef9d4414827-kube-api-access-rx9gf\") pod \"nmstate-console-plugin-7754f76f8b-qlk28\" (UID: \"45b6025c-fe75-4723-8f72-7ef9d4414827\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.071984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-oauth-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073055 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-oauth-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073226 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073370 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-console-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-trusted-ca-bundle\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhvq\" (UniqueName: \"kubernetes.io/projected/548df0df-23bd-4130-b95e-5fb32a95b750-kube-api-access-nzhvq\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " 
pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.073799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-service-ca\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.081195 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj872"] Jan 23 18:14:49 crc kubenswrapper[4760]: W0123 18:14:49.087577 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02fddd4e_8b61_4c07_b08d_f8ab8a2799ba.slice/crio-4a2f98ef336bf993063fa458e5d45ddb60240aec68b15f71d9793519fd42ba44 WatchSource:0}: Error finding container 4a2f98ef336bf993063fa458e5d45ddb60240aec68b15f71d9793519fd42ba44: Status 404 returned error can't find the container with id 4a2f98ef336bf993063fa458e5d45ddb60240aec68b15f71d9793519fd42ba44 Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.174452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-trusted-ca-bundle\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.174844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhvq\" (UniqueName: \"kubernetes.io/projected/548df0df-23bd-4130-b95e-5fb32a95b750-kube-api-access-nzhvq\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc 
kubenswrapper[4760]: I0123 18:14:49.174875 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-service-ca\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.175385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-oauth-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.175740 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-service-ca\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.176118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-oauth-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.176223 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-oauth-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.176290 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.176936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-console-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.177636 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-console-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.177636 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/548df0df-23bd-4130-b95e-5fb32a95b750-trusted-ca-bundle\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.183382 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-oauth-config\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.183603 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/548df0df-23bd-4130-b95e-5fb32a95b750-console-serving-cert\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.183896 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.192795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhvq\" (UniqueName: \"kubernetes.io/projected/548df0df-23bd-4130-b95e-5fb32a95b750-kube-api-access-nzhvq\") pod \"console-6d7f6cdcd7-qrn4c\" (UID: \"548df0df-23bd-4130-b95e-5fb32a95b750\") " pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: W0123 18:14:49.201997 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0627471a_680e_425a_a2de_e4e8d1b4e956.slice/crio-6723d04e98fab21f5cdaba8a47a87b4f12a88aa039b8a68e60114dc81636dca6 WatchSource:0}: Error finding container 6723d04e98fab21f5cdaba8a47a87b4f12a88aa039b8a68e60114dc81636dca6: Status 404 returned error can't find the container with id 6723d04e98fab21f5cdaba8a47a87b4f12a88aa039b8a68e60114dc81636dca6 Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.224652 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.277701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.281120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rbsvp\" (UID: \"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.282970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.450272 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28"] Jan 23 18:14:49 crc kubenswrapper[4760]: W0123 18:14:49.456583 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b6025c_fe75_4723_8f72_7ef9d4414827.slice/crio-ddaa9d64b2cecf06f767be45059baa6a1da99cb84702d21cf41a8cfff3cea7c3 WatchSource:0}: Error finding container ddaa9d64b2cecf06f767be45059baa6a1da99cb84702d21cf41a8cfff3cea7c3: Status 404 returned error can't find the container with id ddaa9d64b2cecf06f767be45059baa6a1da99cb84702d21cf41a8cfff3cea7c3 Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.466259 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.610218 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d75799-fd2e-446c-913d-552115ae872c" path="/var/lib/kubelet/pods/b9d75799-fd2e-446c-913d-552115ae872c/volumes" Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.617319 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d7f6cdcd7-qrn4c"] Jan 23 18:14:49 crc kubenswrapper[4760]: W0123 18:14:49.633218 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548df0df_23bd_4130_b95e_5fb32a95b750.slice/crio-d6df6d201a3ceb34b298506de83c0efe91dd85375339c5750143f29016961a79 WatchSource:0}: Error finding container d6df6d201a3ceb34b298506de83c0efe91dd85375339c5750143f29016961a79: Status 404 returned error can't find the container with id d6df6d201a3ceb34b298506de83c0efe91dd85375339c5750143f29016961a79 Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.647112 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp"] Jan 23 18:14:49 crc kubenswrapper[4760]: W0123 18:14:49.655702 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6ac3e7_df4e_4ebc_981e_fc737f3a55ea.slice/crio-d349623b9668c467ad5beba83e8de8201853928541fedd72a96dc7bf93ff2379 WatchSource:0}: Error finding container d349623b9668c467ad5beba83e8de8201853928541fedd72a96dc7bf93ff2379: Status 404 returned error can't find the container with id d349623b9668c467ad5beba83e8de8201853928541fedd72a96dc7bf93ff2379 Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.759464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j7k9r" 
event={"ID":"0627471a-680e-425a-a2de-e4e8d1b4e956","Type":"ContainerStarted","Data":"6723d04e98fab21f5cdaba8a47a87b4f12a88aa039b8a68e60114dc81636dca6"} Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.761543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d7f6cdcd7-qrn4c" event={"ID":"548df0df-23bd-4130-b95e-5fb32a95b750","Type":"ContainerStarted","Data":"d6df6d201a3ceb34b298506de83c0efe91dd85375339c5750143f29016961a79"} Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.762641 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" event={"ID":"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba","Type":"ContainerStarted","Data":"4a2f98ef336bf993063fa458e5d45ddb60240aec68b15f71d9793519fd42ba44"} Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.763613 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" event={"ID":"45b6025c-fe75-4723-8f72-7ef9d4414827","Type":"ContainerStarted","Data":"ddaa9d64b2cecf06f767be45059baa6a1da99cb84702d21cf41a8cfff3cea7c3"} Jan 23 18:14:49 crc kubenswrapper[4760]: I0123 18:14:49.765193 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" event={"ID":"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea","Type":"ContainerStarted","Data":"d349623b9668c467ad5beba83e8de8201853928541fedd72a96dc7bf93ff2379"} Jan 23 18:14:50 crc kubenswrapper[4760]: I0123 18:14:50.774581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d7f6cdcd7-qrn4c" event={"ID":"548df0df-23bd-4130-b95e-5fb32a95b750","Type":"ContainerStarted","Data":"f33dd1db537beee418302974f514155dbf9beb5e52c3e38412d10f747f9828d5"} Jan 23 18:14:50 crc kubenswrapper[4760]: I0123 18:14:50.814730 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d7f6cdcd7-qrn4c" podStartSLOduration=2.814675071 
podStartE2EDuration="2.814675071s" podCreationTimestamp="2026-01-23 18:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:14:50.803358193 +0000 UTC m=+833.805816156" watchObservedRunningTime="2026-01-23 18:14:50.814675071 +0000 UTC m=+833.817133034" Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.800063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" event={"ID":"5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea","Type":"ContainerStarted","Data":"07bc106a9deed2d9515c25298f5339b570dd1c5d3ecf48e0feff29298860b169"} Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.800707 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.803941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j7k9r" event={"ID":"0627471a-680e-425a-a2de-e4e8d1b4e956","Type":"ContainerStarted","Data":"b9730c992b827b00274ea6e9af2697a37988cd7ee8a505ba83a625d6245d05ae"} Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.804056 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.806016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" event={"ID":"45b6025c-fe75-4723-8f72-7ef9d4414827","Type":"ContainerStarted","Data":"7c31e40382f90a69f1ecf38641893ee3184b02daf22546d32fbb2cd1acaf29ad"} Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.807568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" 
event={"ID":"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba","Type":"ContainerStarted","Data":"1f86ca4e12afc7898ad293ebca345e300c8be665b7dac638377b7eb9bbe1987a"} Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.822313 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" podStartSLOduration=2.478475946 podStartE2EDuration="6.822288371s" podCreationTimestamp="2026-01-23 18:14:48 +0000 UTC" firstStartedPulling="2026-01-23 18:14:49.658307903 +0000 UTC m=+832.660765836" lastFinishedPulling="2026-01-23 18:14:54.002120328 +0000 UTC m=+837.004578261" observedRunningTime="2026-01-23 18:14:54.820769618 +0000 UTC m=+837.823227551" watchObservedRunningTime="2026-01-23 18:14:54.822288371 +0000 UTC m=+837.824746324" Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.843553 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qlk28" podStartSLOduration=2.312411471 podStartE2EDuration="6.843529186s" podCreationTimestamp="2026-01-23 18:14:48 +0000 UTC" firstStartedPulling="2026-01-23 18:14:49.460281292 +0000 UTC m=+832.462739225" lastFinishedPulling="2026-01-23 18:14:53.991399007 +0000 UTC m=+836.993856940" observedRunningTime="2026-01-23 18:14:54.843110894 +0000 UTC m=+837.845568847" watchObservedRunningTime="2026-01-23 18:14:54.843529186 +0000 UTC m=+837.845987119" Jan 23 18:14:54 crc kubenswrapper[4760]: I0123 18:14:54.865420 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-j7k9r" podStartSLOduration=2.037911976 podStartE2EDuration="6.865384449s" podCreationTimestamp="2026-01-23 18:14:48 +0000 UTC" firstStartedPulling="2026-01-23 18:14:49.2041173 +0000 UTC m=+832.206575233" lastFinishedPulling="2026-01-23 18:14:54.031589773 +0000 UTC m=+837.034047706" observedRunningTime="2026-01-23 18:14:54.862537539 +0000 UTC m=+837.864995472" 
watchObservedRunningTime="2026-01-23 18:14:54.865384449 +0000 UTC m=+837.867842382" Jan 23 18:14:56 crc kubenswrapper[4760]: I0123 18:14:56.822457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" event={"ID":"02fddd4e-8b61-4c07-b08d-f8ab8a2799ba","Type":"ContainerStarted","Data":"2f3a73a14f1dfac924530c589e7cc3a1fbdece78732202ef5ef20903d7815dcb"} Jan 23 18:14:56 crc kubenswrapper[4760]: I0123 18:14:56.848663 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj872" podStartSLOduration=1.753368399 podStartE2EDuration="8.848639147s" podCreationTimestamp="2026-01-23 18:14:48 +0000 UTC" firstStartedPulling="2026-01-23 18:14:49.090442174 +0000 UTC m=+832.092900107" lastFinishedPulling="2026-01-23 18:14:56.185712932 +0000 UTC m=+839.188170855" observedRunningTime="2026-01-23 18:14:56.840131459 +0000 UTC m=+839.842589402" watchObservedRunningTime="2026-01-23 18:14:56.848639147 +0000 UTC m=+839.851097110" Jan 23 18:14:59 crc kubenswrapper[4760]: I0123 18:14:59.210114 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-j7k9r" Jan 23 18:14:59 crc kubenswrapper[4760]: I0123 18:14:59.225710 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:59 crc kubenswrapper[4760]: I0123 18:14:59.225763 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:59 crc kubenswrapper[4760]: I0123 18:14:59.233145 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:59 crc kubenswrapper[4760]: I0123 18:14:59.845568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d7f6cdcd7-qrn4c" Jan 23 18:14:59 crc 
kubenswrapper[4760]: I0123 18:14:59.905275 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h2x2h"] Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.139853 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd"] Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.140558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.147469 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd"] Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.148959 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.154900 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.322624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.322793 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.322849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr74r\" (UniqueName: \"kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.424269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr74r\" (UniqueName: \"kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.424368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.424442 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.425254 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.431704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.457965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr74r\" (UniqueName: \"kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r\") pod \"collect-profiles-29486535-gkjcd\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:00 crc kubenswrapper[4760]: I0123 18:15:00.756132 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:01 crc kubenswrapper[4760]: I0123 18:15:01.176652 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd"] Jan 23 18:15:01 crc kubenswrapper[4760]: W0123 18:15:01.179021 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be58c19_65a9_47f0_b186_26ebc2e47ae7.slice/crio-38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e WatchSource:0}: Error finding container 38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e: Status 404 returned error can't find the container with id 38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e Jan 23 18:15:01 crc kubenswrapper[4760]: I0123 18:15:01.855858 4760 generic.go:334] "Generic (PLEG): container finished" podID="0be58c19-65a9-47f0-b186-26ebc2e47ae7" containerID="55ecdea62c38ccbd165b87678715f072b312af424dff675cc873672dea7d5ad4" exitCode=0 Jan 23 18:15:01 crc kubenswrapper[4760]: I0123 18:15:01.855966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" event={"ID":"0be58c19-65a9-47f0-b186-26ebc2e47ae7","Type":"ContainerDied","Data":"55ecdea62c38ccbd165b87678715f072b312af424dff675cc873672dea7d5ad4"} Jan 23 18:15:01 crc kubenswrapper[4760]: I0123 18:15:01.856237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" event={"ID":"0be58c19-65a9-47f0-b186-26ebc2e47ae7","Type":"ContainerStarted","Data":"38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e"} Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.121904 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.262319 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume\") pod \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.262434 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr74r\" (UniqueName: \"kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r\") pod \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.262494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume\") pod \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\" (UID: \"0be58c19-65a9-47f0-b186-26ebc2e47ae7\") " Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.263740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0be58c19-65a9-47f0-b186-26ebc2e47ae7" (UID: "0be58c19-65a9-47f0-b186-26ebc2e47ae7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.267198 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0be58c19-65a9-47f0-b186-26ebc2e47ae7" (UID: "0be58c19-65a9-47f0-b186-26ebc2e47ae7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.271352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r" (OuterVolumeSpecName: "kube-api-access-wr74r") pod "0be58c19-65a9-47f0-b186-26ebc2e47ae7" (UID: "0be58c19-65a9-47f0-b186-26ebc2e47ae7"). InnerVolumeSpecName "kube-api-access-wr74r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.363715 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0be58c19-65a9-47f0-b186-26ebc2e47ae7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.363755 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr74r\" (UniqueName: \"kubernetes.io/projected/0be58c19-65a9-47f0-b186-26ebc2e47ae7-kube-api-access-wr74r\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.363767 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0be58c19-65a9-47f0-b186-26ebc2e47ae7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.878948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" event={"ID":"0be58c19-65a9-47f0-b186-26ebc2e47ae7","Type":"ContainerDied","Data":"38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e"} Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.878994 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b27ef41ed9b77451939e3bef867e717d8deaf569efe4b3285c3116522cca3e" Jan 23 18:15:03 crc kubenswrapper[4760]: I0123 18:15:03.879108 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd" Jan 23 18:15:09 crc kubenswrapper[4760]: I0123 18:15:09.472237 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rbsvp" Jan 23 18:15:23 crc kubenswrapper[4760]: I0123 18:15:23.991067 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw"] Jan 23 18:15:23 crc kubenswrapper[4760]: E0123 18:15:23.991810 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be58c19-65a9-47f0-b186-26ebc2e47ae7" containerName="collect-profiles" Jan 23 18:15:23 crc kubenswrapper[4760]: I0123 18:15:23.991831 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be58c19-65a9-47f0-b186-26ebc2e47ae7" containerName="collect-profiles" Jan 23 18:15:23 crc kubenswrapper[4760]: I0123 18:15:23.991966 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be58c19-65a9-47f0-b186-26ebc2e47ae7" containerName="collect-profiles" Jan 23 18:15:23 crc kubenswrapper[4760]: I0123 18:15:23.993002 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:23 crc kubenswrapper[4760]: I0123 18:15:23.995038 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.006215 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw"] Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.141355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffplg\" (UniqueName: \"kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.141473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.141520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: 
I0123 18:15:24.242759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.242904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffplg\" (UniqueName: \"kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.242981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.243347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.244608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.267889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffplg\" (UniqueName: \"kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.310519 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.639997 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw"] Jan 23 18:15:24 crc kubenswrapper[4760]: I0123 18:15:24.949819 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-h2x2h" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" containerID="cri-o://7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb" gracePeriod=15 Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.015278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" event={"ID":"6af4de6d-bd37-47ca-95b2-bf48577ef81c","Type":"ContainerStarted","Data":"e3ff5c5d29e4f6044839313e7ae88aa05d00d844e1fd51cdc1dbcf6ffbb4c86a"} Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.808486 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h2x2h_d3f94f74-4a2c-419a-b73f-c654dbf783b5/console/0.log" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.808866 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsn8n\" (UniqueName: \"kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.969477 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert\") pod \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\" (UID: \"d3f94f74-4a2c-419a-b73f-c654dbf783b5\") " Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.970501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config" (OuterVolumeSpecName: "console-config") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.970501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca" (OuterVolumeSpecName: "service-ca") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.970524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.970596 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.975872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.975896 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n" (OuterVolumeSpecName: "kube-api-access-nsn8n") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "kube-api-access-nsn8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:25 crc kubenswrapper[4760]: I0123 18:15:25.976342 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d3f94f74-4a2c-419a-b73f-c654dbf783b5" (UID: "d3f94f74-4a2c-419a-b73f-c654dbf783b5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.022666 4760 generic.go:334] "Generic (PLEG): container finished" podID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerID="27ff59ba7d7ef5c1444c6db70405cf4ae47036fcfaf238a4c24c2e5dd210a7bc" exitCode=0 Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.022776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" event={"ID":"6af4de6d-bd37-47ca-95b2-bf48577ef81c","Type":"ContainerDied","Data":"27ff59ba7d7ef5c1444c6db70405cf4ae47036fcfaf238a4c24c2e5dd210a7bc"} Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026633 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h2x2h_d3f94f74-4a2c-419a-b73f-c654dbf783b5/console/0.log" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026671 4760 generic.go:334] "Generic (PLEG): container finished" podID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerID="7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb" exitCode=2 Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2x2h" event={"ID":"d3f94f74-4a2c-419a-b73f-c654dbf783b5","Type":"ContainerDied","Data":"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb"} Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026723 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2x2h" event={"ID":"d3f94f74-4a2c-419a-b73f-c654dbf783b5","Type":"ContainerDied","Data":"a83acfc56c52605edad348df70d880e935604a4368c5a9993662eb5a036fa922"} Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026740 4760 scope.go:117] "RemoveContainer" containerID="7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.026856 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2x2h" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.055106 4760 scope.go:117] "RemoveContainer" containerID="7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb" Jan 23 18:15:26 crc kubenswrapper[4760]: E0123 18:15:26.056894 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb\": container with ID starting with 7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb not found: ID does not exist" containerID="7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.056957 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb"} err="failed to get container status \"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb\": rpc error: code = NotFound desc = could not find container \"7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb\": container with ID starting with 7c20e54cce4a309ed5574ead92f4a73e8759472f39d2b9f3cee37d73637b5cdb not found: ID does not exist" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.068163 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-h2x2h"] Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071517 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071574 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071595 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071613 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsn8n\" (UniqueName: \"kubernetes.io/projected/d3f94f74-4a2c-419a-b73f-c654dbf783b5-kube-api-access-nsn8n\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071629 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071645 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3f94f74-4a2c-419a-b73f-c654dbf783b5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc kubenswrapper[4760]: I0123 18:15:26.071660 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f94f74-4a2c-419a-b73f-c654dbf783b5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:26 crc 
kubenswrapper[4760]: I0123 18:15:26.074940 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-h2x2h"] Jan 23 18:15:27 crc kubenswrapper[4760]: I0123 18:15:27.601053 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" path="/var/lib/kubelet/pods/d3f94f74-4a2c-419a-b73f-c654dbf783b5/volumes" Jan 23 18:15:29 crc kubenswrapper[4760]: I0123 18:15:29.048993 4760 generic.go:334] "Generic (PLEG): container finished" podID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerID="f9f0eba706afa7da543859d6212e1f16ebd3630ed8b1851f50864a4046704b88" exitCode=0 Jan 23 18:15:29 crc kubenswrapper[4760]: I0123 18:15:29.049072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" event={"ID":"6af4de6d-bd37-47ca-95b2-bf48577ef81c","Type":"ContainerDied","Data":"f9f0eba706afa7da543859d6212e1f16ebd3630ed8b1851f50864a4046704b88"} Jan 23 18:15:30 crc kubenswrapper[4760]: I0123 18:15:30.058295 4760 generic.go:334] "Generic (PLEG): container finished" podID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerID="eab1d8cc9503974d3c32c930be6321f3a9c64c1e54142cd3c3adb07663d9c46a" exitCode=0 Jan 23 18:15:30 crc kubenswrapper[4760]: I0123 18:15:30.058462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" event={"ID":"6af4de6d-bd37-47ca-95b2-bf48577ef81c","Type":"ContainerDied","Data":"eab1d8cc9503974d3c32c930be6321f3a9c64c1e54142cd3c3adb07663d9c46a"} Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.311192 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.441586 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util\") pod \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.441631 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffplg\" (UniqueName: \"kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg\") pod \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.441744 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle\") pod \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\" (UID: \"6af4de6d-bd37-47ca-95b2-bf48577ef81c\") " Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.443203 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle" (OuterVolumeSpecName: "bundle") pod "6af4de6d-bd37-47ca-95b2-bf48577ef81c" (UID: "6af4de6d-bd37-47ca-95b2-bf48577ef81c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.447251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg" (OuterVolumeSpecName: "kube-api-access-ffplg") pod "6af4de6d-bd37-47ca-95b2-bf48577ef81c" (UID: "6af4de6d-bd37-47ca-95b2-bf48577ef81c"). InnerVolumeSpecName "kube-api-access-ffplg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.451627 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util" (OuterVolumeSpecName: "util") pod "6af4de6d-bd37-47ca-95b2-bf48577ef81c" (UID: "6af4de6d-bd37-47ca-95b2-bf48577ef81c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.543375 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.543500 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6af4de6d-bd37-47ca-95b2-bf48577ef81c-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:31 crc kubenswrapper[4760]: I0123 18:15:31.543525 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffplg\" (UniqueName: \"kubernetes.io/projected/6af4de6d-bd37-47ca-95b2-bf48577ef81c-kube-api-access-ffplg\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:32 crc kubenswrapper[4760]: I0123 18:15:32.071983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" event={"ID":"6af4de6d-bd37-47ca-95b2-bf48577ef81c","Type":"ContainerDied","Data":"e3ff5c5d29e4f6044839313e7ae88aa05d00d844e1fd51cdc1dbcf6ffbb4c86a"} Jan 23 18:15:32 crc kubenswrapper[4760]: I0123 18:15:32.072092 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ff5c5d29e4f6044839313e7ae88aa05d00d844e1fd51cdc1dbcf6ffbb4c86a" Jan 23 18:15:32 crc kubenswrapper[4760]: I0123 18:15:32.072033 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039016 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj"] Jan 23 18:15:43 crc kubenswrapper[4760]: E0123 18:15:43.039570 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="util" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039581 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="util" Jan 23 18:15:43 crc kubenswrapper[4760]: E0123 18:15:43.039594 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="pull" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039600 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="pull" Jan 23 18:15:43 crc kubenswrapper[4760]: E0123 18:15:43.039612 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="extract" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039617 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" containerName="extract" Jan 23 18:15:43 crc kubenswrapper[4760]: E0123 18:15:43.039630 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039635 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039722 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af4de6d-bd37-47ca-95b2-bf48577ef81c" 
containerName="extract" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.039734 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f94f74-4a2c-419a-b73f-c654dbf783b5" containerName="console" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.040086 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.046325 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.046325 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.046361 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.046379 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nsnw7" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.046795 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.065696 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj"] Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.194947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-webhook-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 
18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.194991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhn8\" (UniqueName: \"kubernetes.io/projected/21e5a15c-db54-43f3-8dd9-834d4a327edd-kube-api-access-glhn8\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.195017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.278402 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6"] Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.279870 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.292400 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.292840 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.293137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g2n7b" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.301227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-webhook-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.301377 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhn8\" (UniqueName: \"kubernetes.io/projected/21e5a15c-db54-43f3-8dd9-834d4a327edd-kube-api-access-glhn8\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.301465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc 
kubenswrapper[4760]: I0123 18:15:43.301564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6"] Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.308512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-webhook-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.314240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e5a15c-db54-43f3-8dd9-834d4a327edd-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.348906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhn8\" (UniqueName: \"kubernetes.io/projected/21e5a15c-db54-43f3-8dd9-834d4a327edd-kube-api-access-glhn8\") pod \"metallb-operator-controller-manager-5cd7b57f9b-wsfcj\" (UID: \"21e5a15c-db54-43f3-8dd9-834d4a327edd\") " pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.356510 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.402358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-webhook-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.402445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2fc\" (UniqueName: \"kubernetes.io/projected/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-kube-api-access-hm2fc\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.402697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-apiservice-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.507731 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-webhook-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.508103 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hm2fc\" (UniqueName: \"kubernetes.io/projected/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-kube-api-access-hm2fc\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.508378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-apiservice-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.512197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-apiservice-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.518730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-webhook-cert\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.525651 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm2fc\" (UniqueName: \"kubernetes.io/projected/dc02afcf-f520-4bdf-a8ae-52c2a9c1857e-kube-api-access-hm2fc\") pod \"metallb-operator-webhook-server-b8f8768df-khhb6\" (UID: \"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e\") " 
pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.583318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj"] Jan 23 18:15:43 crc kubenswrapper[4760]: W0123 18:15:43.591259 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e5a15c_db54_43f3_8dd9_834d4a327edd.slice/crio-80ed1ba6d2c44bccc2d51b81c7917250b1978d5afff9ac1d41b65c43688fedfd WatchSource:0}: Error finding container 80ed1ba6d2c44bccc2d51b81c7917250b1978d5afff9ac1d41b65c43688fedfd: Status 404 returned error can't find the container with id 80ed1ba6d2c44bccc2d51b81c7917250b1978d5afff9ac1d41b65c43688fedfd Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.597606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:43 crc kubenswrapper[4760]: I0123 18:15:43.752684 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6"] Jan 23 18:15:43 crc kubenswrapper[4760]: W0123 18:15:43.759343 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc02afcf_f520_4bdf_a8ae_52c2a9c1857e.slice/crio-a99e82f637d8524f316423c94d65054e56446e5c79c060aa913b4747f26d9d87 WatchSource:0}: Error finding container a99e82f637d8524f316423c94d65054e56446e5c79c060aa913b4747f26d9d87: Status 404 returned error can't find the container with id a99e82f637d8524f316423c94d65054e56446e5c79c060aa913b4747f26d9d87 Jan 23 18:15:44 crc kubenswrapper[4760]: I0123 18:15:44.132962 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" 
event={"ID":"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e","Type":"ContainerStarted","Data":"a99e82f637d8524f316423c94d65054e56446e5c79c060aa913b4747f26d9d87"} Jan 23 18:15:44 crc kubenswrapper[4760]: I0123 18:15:44.134024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" event={"ID":"21e5a15c-db54-43f3-8dd9-834d4a327edd","Type":"ContainerStarted","Data":"80ed1ba6d2c44bccc2d51b81c7917250b1978d5afff9ac1d41b65c43688fedfd"} Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.163780 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" event={"ID":"21e5a15c-db54-43f3-8dd9-834d4a327edd","Type":"ContainerStarted","Data":"5f1adf78434453a5e6d15edce15424f2b8b982cdb6420e0cc96c184fd7613dce"} Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.164556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.165249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" event={"ID":"dc02afcf-f520-4bdf-a8ae-52c2a9c1857e","Type":"ContainerStarted","Data":"22d0acd5f49bb8e7d6bc9ed3f82394e74e6c03899681e9cb4b1877a692f9c4e0"} Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.165463 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.181749 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" podStartSLOduration=1.189230871 podStartE2EDuration="6.181732366s" podCreationTimestamp="2026-01-23 18:15:43 +0000 UTC" firstStartedPulling="2026-01-23 18:15:43.593111997 +0000 UTC m=+886.595569930" 
lastFinishedPulling="2026-01-23 18:15:48.585613452 +0000 UTC m=+891.588071425" observedRunningTime="2026-01-23 18:15:49.17832039 +0000 UTC m=+892.180778323" watchObservedRunningTime="2026-01-23 18:15:49.181732366 +0000 UTC m=+892.184190299" Jan 23 18:15:49 crc kubenswrapper[4760]: I0123 18:15:49.204346 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" podStartSLOduration=1.366840043 podStartE2EDuration="6.204330136s" podCreationTimestamp="2026-01-23 18:15:43 +0000 UTC" firstStartedPulling="2026-01-23 18:15:43.765274866 +0000 UTC m=+886.767732799" lastFinishedPulling="2026-01-23 18:15:48.602764949 +0000 UTC m=+891.605222892" observedRunningTime="2026-01-23 18:15:49.20127149 +0000 UTC m=+892.203729423" watchObservedRunningTime="2026-01-23 18:15:49.204330136 +0000 UTC m=+892.206788069" Jan 23 18:16:03 crc kubenswrapper[4760]: I0123 18:16:03.603072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b8f8768df-khhb6" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.001322 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.003683 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.014911 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.041357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.041631 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrs8\" (UniqueName: \"kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.041653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.143101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.143157 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ptrs8\" (UniqueName: \"kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.143191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.143606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.143722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.167808 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrs8\" (UniqueName: \"kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8\") pod \"community-operators-q9pdt\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.330722 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:08 crc kubenswrapper[4760]: I0123 18:16:08.809020 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:09 crc kubenswrapper[4760]: I0123 18:16:09.291259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerStarted","Data":"650dfeb5c5d7b3f332cc786357808e5ee727432e53ced868458a91652fbbbedc"} Jan 23 18:16:11 crc kubenswrapper[4760]: I0123 18:16:11.304015 4760 generic.go:334] "Generic (PLEG): container finished" podID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerID="673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319" exitCode=0 Jan 23 18:16:11 crc kubenswrapper[4760]: I0123 18:16:11.304078 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerDied","Data":"673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319"} Jan 23 18:16:13 crc kubenswrapper[4760]: I0123 18:16:13.318614 4760 generic.go:334] "Generic (PLEG): container finished" podID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerID="377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac" exitCode=0 Jan 23 18:16:13 crc kubenswrapper[4760]: I0123 18:16:13.318662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerDied","Data":"377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac"} Jan 23 18:16:14 crc kubenswrapper[4760]: I0123 18:16:14.326739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" 
event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerStarted","Data":"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a"} Jan 23 18:16:14 crc kubenswrapper[4760]: I0123 18:16:14.345741 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q9pdt" podStartSLOduration=4.93239419 podStartE2EDuration="7.345725623s" podCreationTimestamp="2026-01-23 18:16:07 +0000 UTC" firstStartedPulling="2026-01-23 18:16:11.305793762 +0000 UTC m=+914.308251735" lastFinishedPulling="2026-01-23 18:16:13.719125245 +0000 UTC m=+916.721583168" observedRunningTime="2026-01-23 18:16:14.342908783 +0000 UTC m=+917.345366736" watchObservedRunningTime="2026-01-23 18:16:14.345725623 +0000 UTC m=+917.348183546" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.805533 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.806830 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.821630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.895175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.895562 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkvj\" (UniqueName: \"kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.895690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.997192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.997253 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.997348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkvj\" (UniqueName: \"kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.997809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:16 crc kubenswrapper[4760]: I0123 18:16:16.997865 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:17 crc kubenswrapper[4760]: I0123 18:16:17.017507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkvj\" (UniqueName: \"kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj\") pod \"redhat-marketplace-wtpn4\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:17 crc kubenswrapper[4760]: I0123 18:16:17.353853 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:17 crc kubenswrapper[4760]: I0123 18:16:17.782103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:17 crc kubenswrapper[4760]: W0123 18:16:17.792164 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod337eff77_ea95_40d0_8a93_32f0b4c5b840.slice/crio-c2ddf20499e32aa1b103c2867764d1ce65483a4b27f74b584b56082ee07e1b77 WatchSource:0}: Error finding container c2ddf20499e32aa1b103c2867764d1ce65483a4b27f74b584b56082ee07e1b77: Status 404 returned error can't find the container with id c2ddf20499e32aa1b103c2867764d1ce65483a4b27f74b584b56082ee07e1b77 Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.331754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.332036 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.368890 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerStarted","Data":"d50b85fbcbb98b41a0e44019d12247cefbaf6ef0ed268f5f9d93605334809ee3"} Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.368931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerStarted","Data":"c2ddf20499e32aa1b103c2867764d1ce65483a4b27f74b584b56082ee07e1b77"} Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.374179 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:18 crc kubenswrapper[4760]: I0123 18:16:18.429620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:19 crc kubenswrapper[4760]: I0123 18:16:19.377211 4760 generic.go:334] "Generic (PLEG): container finished" podID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerID="d50b85fbcbb98b41a0e44019d12247cefbaf6ef0ed268f5f9d93605334809ee3" exitCode=0 Jan 23 18:16:19 crc kubenswrapper[4760]: I0123 18:16:19.377274 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerDied","Data":"d50b85fbcbb98b41a0e44019d12247cefbaf6ef0ed268f5f9d93605334809ee3"} Jan 23 18:16:20 crc kubenswrapper[4760]: I0123 18:16:20.385568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerStarted","Data":"2dd90b3f43b3aa2902e5285b9e2959a4ec18859514af497d7cef49769238f6ce"} Jan 23 18:16:20 crc kubenswrapper[4760]: I0123 18:16:20.748657 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:20 crc kubenswrapper[4760]: I0123 18:16:20.749053 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q9pdt" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="registry-server" containerID="cri-o://d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a" gracePeriod=2 Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.121157 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.308075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrs8\" (UniqueName: \"kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8\") pod \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.308171 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities\") pod \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.308255 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content\") pod \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\" (UID: \"607f6948-02d1-4ed2-a010-4ff8d4d74cab\") " Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.312802 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities" (OuterVolumeSpecName: "utilities") pod "607f6948-02d1-4ed2-a010-4ff8d4d74cab" (UID: "607f6948-02d1-4ed2-a010-4ff8d4d74cab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.316373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8" (OuterVolumeSpecName: "kube-api-access-ptrs8") pod "607f6948-02d1-4ed2-a010-4ff8d4d74cab" (UID: "607f6948-02d1-4ed2-a010-4ff8d4d74cab"). InnerVolumeSpecName "kube-api-access-ptrs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.365572 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "607f6948-02d1-4ed2-a010-4ff8d4d74cab" (UID: "607f6948-02d1-4ed2-a010-4ff8d4d74cab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.393980 4760 generic.go:334] "Generic (PLEG): container finished" podID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerID="d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a" exitCode=0 Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.394041 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9pdt" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.394075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerDied","Data":"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a"} Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.394114 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9pdt" event={"ID":"607f6948-02d1-4ed2-a010-4ff8d4d74cab","Type":"ContainerDied","Data":"650dfeb5c5d7b3f332cc786357808e5ee727432e53ced868458a91652fbbbedc"} Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.394137 4760 scope.go:117] "RemoveContainer" containerID="d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.397112 4760 generic.go:334] "Generic (PLEG): container finished" podID="337eff77-ea95-40d0-8a93-32f0b4c5b840" 
containerID="2dd90b3f43b3aa2902e5285b9e2959a4ec18859514af497d7cef49769238f6ce" exitCode=0 Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.397167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerDied","Data":"2dd90b3f43b3aa2902e5285b9e2959a4ec18859514af497d7cef49769238f6ce"} Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.409842 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.409886 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrs8\" (UniqueName: \"kubernetes.io/projected/607f6948-02d1-4ed2-a010-4ff8d4d74cab-kube-api-access-ptrs8\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.409903 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607f6948-02d1-4ed2-a010-4ff8d4d74cab-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.415760 4760 scope.go:117] "RemoveContainer" containerID="377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.436140 4760 scope.go:117] "RemoveContainer" containerID="673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.446647 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.450399 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q9pdt"] Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.470376 4760 
scope.go:117] "RemoveContainer" containerID="d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a" Jan 23 18:16:21 crc kubenswrapper[4760]: E0123 18:16:21.470999 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a\": container with ID starting with d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a not found: ID does not exist" containerID="d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.471046 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a"} err="failed to get container status \"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a\": rpc error: code = NotFound desc = could not find container \"d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a\": container with ID starting with d21ce085d1f7352fe0d9b34f06a5890173def400e5d9c9b37294722b154c6d2a not found: ID does not exist" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.471081 4760 scope.go:117] "RemoveContainer" containerID="377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac" Jan 23 18:16:21 crc kubenswrapper[4760]: E0123 18:16:21.472687 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac\": container with ID starting with 377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac not found: ID does not exist" containerID="377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.472738 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac"} err="failed to get container status \"377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac\": rpc error: code = NotFound desc = could not find container \"377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac\": container with ID starting with 377603ab922c469447bebd4dd3c81c6c7ff393e7d3e90b15e1561b885ce125ac not found: ID does not exist" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.472761 4760 scope.go:117] "RemoveContainer" containerID="673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319" Jan 23 18:16:21 crc kubenswrapper[4760]: E0123 18:16:21.473170 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319\": container with ID starting with 673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319 not found: ID does not exist" containerID="673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.473226 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319"} err="failed to get container status \"673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319\": rpc error: code = NotFound desc = could not find container \"673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319\": container with ID starting with 673b5b3df95bd3738456482280af7bee6176c5215a2e92f7523b793f7e158319 not found: ID does not exist" Jan 23 18:16:21 crc kubenswrapper[4760]: I0123 18:16:21.605304 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" path="/var/lib/kubelet/pods/607f6948-02d1-4ed2-a010-4ff8d4d74cab/volumes" Jan 23 18:16:22 crc kubenswrapper[4760]: I0123 
18:16:22.405884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerStarted","Data":"0dd1e1aac84d4380e80ecc0088fce31b17be7b95f73d8b8c88a71ba79c408b10"} Jan 23 18:16:22 crc kubenswrapper[4760]: I0123 18:16:22.430470 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtpn4" podStartSLOduration=3.641272577 podStartE2EDuration="6.430456391s" podCreationTimestamp="2026-01-23 18:16:16 +0000 UTC" firstStartedPulling="2026-01-23 18:16:19.378980563 +0000 UTC m=+922.381438496" lastFinishedPulling="2026-01-23 18:16:22.168164377 +0000 UTC m=+925.170622310" observedRunningTime="2026-01-23 18:16:22.427072235 +0000 UTC m=+925.429530178" watchObservedRunningTime="2026-01-23 18:16:22.430456391 +0000 UTC m=+925.432914324" Jan 23 18:16:23 crc kubenswrapper[4760]: I0123 18:16:23.359648 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cd7b57f9b-wsfcj" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.166078 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt"] Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.166700 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="extract-content" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.166716 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="extract-content" Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.166729 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="registry-server" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.166737 4760 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="registry-server" Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.166757 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="extract-utilities" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.166765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="extract-utilities" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.166899 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="607f6948-02d1-4ed2-a010-4ff8d4d74cab" containerName="registry-server" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.167373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.170510 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.170594 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jgxlt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.171011 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cxwr6"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.173506 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.175923 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.178194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.182855 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.247041 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vcqvw"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.247958 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.251103 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.251172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.251228 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpplb" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.251612 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.265472 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-5pm7w"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.266268 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.267844 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.301937 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5pm7w"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-conf\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metallb-excludel2\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-sockets\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346818 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346912 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e951e064-8240-463f-b524-295257f45405-metrics-certs\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.346975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-metrics\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpwj\" (UniqueName: \"kubernetes.io/projected/e951e064-8240-463f-b524-295257f45405-kube-api-access-bqpwj\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdfea4ed-b515-4324-a846-11743d9ae4ab-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-reloader\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347067 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnbp\" (UniqueName: \"kubernetes.io/projected/bdfea4ed-b515-4324-a846-11743d9ae4ab-kube-api-access-mcnbp\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bsn\" (UniqueName: \"kubernetes.io/projected/c57f00f5-beec-44ff-b9cc-83ed33ddc502-kube-api-access-29bsn\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e951e064-8240-463f-b524-295257f45405-frr-startup\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.347186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-sockets\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e951e064-8240-463f-b524-295257f45405-metrics-certs\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-metrics\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpwj\" (UniqueName: \"kubernetes.io/projected/e951e064-8240-463f-b524-295257f45405-kube-api-access-bqpwj\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.448855 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.448929 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist podName:c57f00f5-beec-44ff-b9cc-83ed33ddc502 nodeName:}" failed. No retries permitted until 2026-01-23 18:16:24.948911163 +0000 UTC m=+927.951369096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist") pod "speaker-vcqvw" (UID: "c57f00f5-beec-44ff-b9cc-83ed33ddc502") : secret "metallb-memberlist" not found Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.448762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdfea4ed-b515-4324-a846-11743d9ae4ab-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-reloader\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449528 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-metrics\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-sockets\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449535 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-reloader\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " 
pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-cert\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-metrics-certs\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnbp\" (UniqueName: \"kubernetes.io/projected/bdfea4ed-b515-4324-a846-11743d9ae4ab-kube-api-access-mcnbp\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29bsn\" (UniqueName: \"kubernetes.io/projected/c57f00f5-beec-44ff-b9cc-83ed33ddc502-kube-api-access-29bsn\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.449996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e951e064-8240-463f-b524-295257f45405-frr-startup\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 
23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42hn\" (UniqueName: \"kubernetes.io/projected/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-kube-api-access-k42hn\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-conf\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450077 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metallb-excludel2\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.450352 4760 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.450423 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs podName:c57f00f5-beec-44ff-b9cc-83ed33ddc502 nodeName:}" failed. 
No retries permitted until 2026-01-23 18:16:24.950387645 +0000 UTC m=+927.952845578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs") pod "speaker-vcqvw" (UID: "c57f00f5-beec-44ff-b9cc-83ed33ddc502") : secret "speaker-certs-secret" not found Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metallb-excludel2\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450686 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e951e064-8240-463f-b524-295257f45405-frr-conf\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.450819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e951e064-8240-463f-b524-295257f45405-frr-startup\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.453728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e951e064-8240-463f-b524-295257f45405-metrics-certs\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.453924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bdfea4ed-b515-4324-a846-11743d9ae4ab-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.477066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpwj\" (UniqueName: \"kubernetes.io/projected/e951e064-8240-463f-b524-295257f45405-kube-api-access-bqpwj\") pod \"frr-k8s-cxwr6\" (UID: \"e951e064-8240-463f-b524-295257f45405\") " pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.477889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnbp\" (UniqueName: \"kubernetes.io/projected/bdfea4ed-b515-4324-a846-11743d9ae4ab-kube-api-access-mcnbp\") pod \"frr-k8s-webhook-server-7df86c4f6c-lzmgt\" (UID: \"bdfea4ed-b515-4324-a846-11743d9ae4ab\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.483549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.492013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29bsn\" (UniqueName: \"kubernetes.io/projected/c57f00f5-beec-44ff-b9cc-83ed33ddc502-kube-api-access-29bsn\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.492295 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.551426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-cert\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.551469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-metrics-certs\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.551511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42hn\" (UniqueName: \"kubernetes.io/projected/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-kube-api-access-k42hn\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.556803 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.557202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-metrics-certs\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.574292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-cert\") 
pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.577292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42hn\" (UniqueName: \"kubernetes.io/projected/d77acbc5-a14f-4002-ac5d-f6c90f44faf6-kube-api-access-k42hn\") pod \"controller-6968d8fdc4-5pm7w\" (UID: \"d77acbc5-a14f-4002-ac5d-f6c90f44faf6\") " pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.579733 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.723471 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.788136 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5pm7w"] Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.958259 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.958704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:24 crc kubenswrapper[4760]: E0123 18:16:24.958976 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 18:16:24 crc kubenswrapper[4760]: 
E0123 18:16:24.959093 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist podName:c57f00f5-beec-44ff-b9cc-83ed33ddc502 nodeName:}" failed. No retries permitted until 2026-01-23 18:16:25.9590652 +0000 UTC m=+928.961523173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist") pod "speaker-vcqvw" (UID: "c57f00f5-beec-44ff-b9cc-83ed33ddc502") : secret "metallb-memberlist" not found Jan 23 18:16:24 crc kubenswrapper[4760]: I0123 18:16:24.964338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-metrics-certs\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.427095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" event={"ID":"bdfea4ed-b515-4324-a846-11743d9ae4ab","Type":"ContainerStarted","Data":"78bcd69aa2e3fa30bcabda41ebd0d593593bde4371c91ca52ea23e1ddb10faf7"} Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.428385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"3ce523803d25a7af3378315d2f782b078e77ae42a7f1310014dc8167ba8406f0"} Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.429997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5pm7w" event={"ID":"d77acbc5-a14f-4002-ac5d-f6c90f44faf6","Type":"ContainerStarted","Data":"63060252899b62dd8eb5acffbe818d8542a85992cc9c041ca48012165a637ef7"} Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.430045 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-6968d8fdc4-5pm7w" event={"ID":"d77acbc5-a14f-4002-ac5d-f6c90f44faf6","Type":"ContainerStarted","Data":"5a3db9706b10dc034224f6168f96345622a1dea9365a37db8f6302d63c9352c1"} Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.430072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5pm7w" event={"ID":"d77acbc5-a14f-4002-ac5d-f6c90f44faf6","Type":"ContainerStarted","Data":"65b968000bb9d8e4aa1d8dfe3a1131477e8ea629e0ea5b51f0381569f8d8e071"} Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.430205 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.466978 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-5pm7w" podStartSLOduration=1.466947714 podStartE2EDuration="1.466947714s" podCreationTimestamp="2026-01-23 18:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:16:25.458360471 +0000 UTC m=+928.460818434" watchObservedRunningTime="2026-01-23 18:16:25.466947714 +0000 UTC m=+928.469405687" Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.971022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:25 crc kubenswrapper[4760]: I0123 18:16:25.975268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c57f00f5-beec-44ff-b9cc-83ed33ddc502-memberlist\") pod \"speaker-vcqvw\" (UID: \"c57f00f5-beec-44ff-b9cc-83ed33ddc502\") " pod="metallb-system/speaker-vcqvw" Jan 23 18:16:26 crc 
kubenswrapper[4760]: I0123 18:16:26.062781 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vcqvw" Jan 23 18:16:26 crc kubenswrapper[4760]: W0123 18:16:26.091316 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57f00f5_beec_44ff_b9cc_83ed33ddc502.slice/crio-ca242ed75c919f6859b41a6682aed294e40f4f6a73fa899b6ee5657be05142fa WatchSource:0}: Error finding container ca242ed75c919f6859b41a6682aed294e40f4f6a73fa899b6ee5657be05142fa: Status 404 returned error can't find the container with id ca242ed75c919f6859b41a6682aed294e40f4f6a73fa899b6ee5657be05142fa Jan 23 18:16:26 crc kubenswrapper[4760]: I0123 18:16:26.442547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vcqvw" event={"ID":"c57f00f5-beec-44ff-b9cc-83ed33ddc502","Type":"ContainerStarted","Data":"ca242ed75c919f6859b41a6682aed294e40f4f6a73fa899b6ee5657be05142fa"} Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.355348 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.355629 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.429176 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.477679 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vcqvw" event={"ID":"c57f00f5-beec-44ff-b9cc-83ed33ddc502","Type":"ContainerStarted","Data":"a5af93227679883f1206d7064034bcdbe11535bbdfb94e94ebb64d5673824f38"} Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.502470 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/speaker-vcqvw" podStartSLOduration=3.502452999 podStartE2EDuration="3.502452999s" podCreationTimestamp="2026-01-23 18:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:16:27.500114653 +0000 UTC m=+930.502572586" watchObservedRunningTime="2026-01-23 18:16:27.502452999 +0000 UTC m=+930.504910922" Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.532679 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:27 crc kubenswrapper[4760]: I0123 18:16:27.663700 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:28 crc kubenswrapper[4760]: I0123 18:16:28.496375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vcqvw" event={"ID":"c57f00f5-beec-44ff-b9cc-83ed33ddc502","Type":"ContainerStarted","Data":"614ea00d14336d8027544b44d107b8e0117a6e2f7f2265cd250ec8f0ed3777cc"} Jan 23 18:16:28 crc kubenswrapper[4760]: I0123 18:16:28.497055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vcqvw" Jan 23 18:16:29 crc kubenswrapper[4760]: I0123 18:16:29.506792 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wtpn4" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="registry-server" containerID="cri-o://0dd1e1aac84d4380e80ecc0088fce31b17be7b95f73d8b8c88a71ba79c408b10" gracePeriod=2 Jan 23 18:16:30 crc kubenswrapper[4760]: I0123 18:16:30.513659 4760 generic.go:334] "Generic (PLEG): container finished" podID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerID="0dd1e1aac84d4380e80ecc0088fce31b17be7b95f73d8b8c88a71ba79c408b10" exitCode=0 Jan 23 18:16:30 crc kubenswrapper[4760]: I0123 18:16:30.513700 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerDied","Data":"0dd1e1aac84d4380e80ecc0088fce31b17be7b95f73d8b8c88a71ba79c408b10"} Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.876570 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.965298 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content\") pod \"337eff77-ea95-40d0-8a93-32f0b4c5b840\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.965355 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pkvj\" (UniqueName: \"kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj\") pod \"337eff77-ea95-40d0-8a93-32f0b4c5b840\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.965446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities\") pod \"337eff77-ea95-40d0-8a93-32f0b4c5b840\" (UID: \"337eff77-ea95-40d0-8a93-32f0b4c5b840\") " Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.966344 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities" (OuterVolumeSpecName: "utilities") pod "337eff77-ea95-40d0-8a93-32f0b4c5b840" (UID: "337eff77-ea95-40d0-8a93-32f0b4c5b840"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.972040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj" (OuterVolumeSpecName: "kube-api-access-6pkvj") pod "337eff77-ea95-40d0-8a93-32f0b4c5b840" (UID: "337eff77-ea95-40d0-8a93-32f0b4c5b840"). InnerVolumeSpecName "kube-api-access-6pkvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:16:31 crc kubenswrapper[4760]: I0123 18:16:31.988510 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "337eff77-ea95-40d0-8a93-32f0b4c5b840" (UID: "337eff77-ea95-40d0-8a93-32f0b4c5b840"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.067141 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.067201 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/337eff77-ea95-40d0-8a93-32f0b4c5b840-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.067223 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pkvj\" (UniqueName: \"kubernetes.io/projected/337eff77-ea95-40d0-8a93-32f0b4c5b840-kube-api-access-6pkvj\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.528932 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" 
event={"ID":"bdfea4ed-b515-4324-a846-11743d9ae4ab","Type":"ContainerStarted","Data":"1d0bf4824ae4036adc6a218e7536c6b9278ea89836edcdd869e75e02d4f49db1"} Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.529185 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.531341 4760 generic.go:334] "Generic (PLEG): container finished" podID="e951e064-8240-463f-b524-295257f45405" containerID="08ccf9c5c4bae10c40d4c1bb7d1a299bccecb9639ff9cb230dec44ba6e876627" exitCode=0 Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.531421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerDied","Data":"08ccf9c5c4bae10c40d4c1bb7d1a299bccecb9639ff9cb230dec44ba6e876627"} Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.534720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtpn4" event={"ID":"337eff77-ea95-40d0-8a93-32f0b4c5b840","Type":"ContainerDied","Data":"c2ddf20499e32aa1b103c2867764d1ce65483a4b27f74b584b56082ee07e1b77"} Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.534807 4760 scope.go:117] "RemoveContainer" containerID="0dd1e1aac84d4380e80ecc0088fce31b17be7b95f73d8b8c88a71ba79c408b10" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.534904 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtpn4" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.557869 4760 scope.go:117] "RemoveContainer" containerID="2dd90b3f43b3aa2902e5285b9e2959a4ec18859514af497d7cef49769238f6ce" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.570839 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" podStartSLOduration=1.564303003 podStartE2EDuration="8.570814835s" podCreationTimestamp="2026-01-23 18:16:24 +0000 UTC" firstStartedPulling="2026-01-23 18:16:24.731267595 +0000 UTC m=+927.733725528" lastFinishedPulling="2026-01-23 18:16:31.737779427 +0000 UTC m=+934.740237360" observedRunningTime="2026-01-23 18:16:32.557545469 +0000 UTC m=+935.560003432" watchObservedRunningTime="2026-01-23 18:16:32.570814835 +0000 UTC m=+935.573272768" Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.600576 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.608133 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtpn4"] Jan 23 18:16:32 crc kubenswrapper[4760]: I0123 18:16:32.610806 4760 scope.go:117] "RemoveContainer" containerID="d50b85fbcbb98b41a0e44019d12247cefbaf6ef0ed268f5f9d93605334809ee3" Jan 23 18:16:33 crc kubenswrapper[4760]: I0123 18:16:33.545925 4760 generic.go:334] "Generic (PLEG): container finished" podID="e951e064-8240-463f-b524-295257f45405" containerID="8fdc7c15bd237a7c1a90f98645af3cbb88a4c8441ff29cd05d0e9c082c65ee92" exitCode=0 Jan 23 18:16:33 crc kubenswrapper[4760]: I0123 18:16:33.546036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerDied","Data":"8fdc7c15bd237a7c1a90f98645af3cbb88a4c8441ff29cd05d0e9c082c65ee92"} Jan 23 18:16:33 crc 
kubenswrapper[4760]: I0123 18:16:33.607157 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" path="/var/lib/kubelet/pods/337eff77-ea95-40d0-8a93-32f0b4c5b840/volumes" Jan 23 18:16:34 crc kubenswrapper[4760]: I0123 18:16:34.553968 4760 generic.go:334] "Generic (PLEG): container finished" podID="e951e064-8240-463f-b524-295257f45405" containerID="7cfe589e907fc52852991c3e94fa0a0093d59b84d5a01f48db2fca7a539d8bf0" exitCode=0 Jan 23 18:16:34 crc kubenswrapper[4760]: I0123 18:16:34.554087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerDied","Data":"7cfe589e907fc52852991c3e94fa0a0093d59b84d5a01f48db2fca7a539d8bf0"} Jan 23 18:16:34 crc kubenswrapper[4760]: I0123 18:16:34.589014 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-5pm7w" Jan 23 18:16:35 crc kubenswrapper[4760]: I0123 18:16:35.568605 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"c4a91b8817e1d4cfd8e80782a83a194b90a09eed61ffdba9cef01441f1480f23"} Jan 23 18:16:35 crc kubenswrapper[4760]: I0123 18:16:35.568891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"3e607b26db74b3d7e6c71598c5a89aafe68aeb08f25cf4b1729047828639ec4d"} Jan 23 18:16:35 crc kubenswrapper[4760]: I0123 18:16:35.568905 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"589b1f3a2d8ea594ddcd71362f1c5109d09032a5b0208b534e214bc34978be0e"} Jan 23 18:16:35 crc kubenswrapper[4760]: I0123 18:16:35.568916 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"2ba1712d34e9bf45ea04e00481a466e3155a574abdcda796db100c48cf94af4a"} Jan 23 18:16:36 crc kubenswrapper[4760]: I0123 18:16:36.583501 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"aa9129baaaf98b1ced71fe4ef4472f8181a7b8bcee854db0ae0e17c57bb27b99"} Jan 23 18:16:36 crc kubenswrapper[4760]: I0123 18:16:36.583561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cxwr6" event={"ID":"e951e064-8240-463f-b524-295257f45405","Type":"ContainerStarted","Data":"7b749a00bce460da2aae4d021331f6207edc8fbb8cdd7854fc2ad06fb59352cc"} Jan 23 18:16:36 crc kubenswrapper[4760]: I0123 18:16:36.584333 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:36 crc kubenswrapper[4760]: I0123 18:16:36.622232 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cxwr6" podStartSLOduration=5.550631023 podStartE2EDuration="12.62220665s" podCreationTimestamp="2026-01-23 18:16:24 +0000 UTC" firstStartedPulling="2026-01-23 18:16:24.631044474 +0000 UTC m=+927.633502407" lastFinishedPulling="2026-01-23 18:16:31.702620101 +0000 UTC m=+934.705078034" observedRunningTime="2026-01-23 18:16:36.611385723 +0000 UTC m=+939.613843696" watchObservedRunningTime="2026-01-23 18:16:36.62220665 +0000 UTC m=+939.624664623" Jan 23 18:16:39 crc kubenswrapper[4760]: I0123 18:16:39.492789 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:39 crc kubenswrapper[4760]: I0123 18:16:39.528322 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:44 crc kubenswrapper[4760]: I0123 18:16:44.487745 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lzmgt" Jan 23 18:16:46 crc kubenswrapper[4760]: I0123 18:16:46.066282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vcqvw" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.263236 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:49 crc kubenswrapper[4760]: E0123 18:16:49.263815 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="extract-content" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.263832 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="extract-content" Jan 23 18:16:49 crc kubenswrapper[4760]: E0123 18:16:49.263850 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="registry-server" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.263859 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="registry-server" Jan 23 18:16:49 crc kubenswrapper[4760]: E0123 18:16:49.263883 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="extract-utilities" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.263893 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="extract-utilities" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.264021 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="337eff77-ea95-40d0-8a93-32f0b4c5b840" containerName="registry-server" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.264581 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.270245 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ft854" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.270365 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.277291 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.300671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.395686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlqzg\" (UniqueName: \"kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg\") pod \"openstack-operator-index-x2qbb\" (UID: \"93fec970-5b48-4016-8439-e6779350b60e\") " pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.497546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlqzg\" (UniqueName: \"kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg\") pod \"openstack-operator-index-x2qbb\" (UID: \"93fec970-5b48-4016-8439-e6779350b60e\") " pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.514037 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlqzg\" (UniqueName: \"kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg\") pod \"openstack-operator-index-x2qbb\" (UID: 
\"93fec970-5b48-4016-8439-e6779350b60e\") " pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.588911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:49 crc kubenswrapper[4760]: I0123 18:16:49.830435 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:50 crc kubenswrapper[4760]: I0123 18:16:50.692911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2qbb" event={"ID":"93fec970-5b48-4016-8439-e6779350b60e","Type":"ContainerStarted","Data":"136825c13a8b8c8bd88c88a91428cfdbb032f84e0b517e1d3dc41bdb9f748cb1"} Jan 23 18:16:52 crc kubenswrapper[4760]: I0123 18:16:52.627546 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:52 crc kubenswrapper[4760]: I0123 18:16:52.709102 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2qbb" event={"ID":"93fec970-5b48-4016-8439-e6779350b60e","Type":"ContainerStarted","Data":"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56"} Jan 23 18:16:52 crc kubenswrapper[4760]: I0123 18:16:52.728544 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x2qbb" podStartSLOduration=1.709183127 podStartE2EDuration="3.728516714s" podCreationTimestamp="2026-01-23 18:16:49 +0000 UTC" firstStartedPulling="2026-01-23 18:16:49.844640977 +0000 UTC m=+952.847098910" lastFinishedPulling="2026-01-23 18:16:51.863974564 +0000 UTC m=+954.866432497" observedRunningTime="2026-01-23 18:16:52.727912088 +0000 UTC m=+955.730370031" watchObservedRunningTime="2026-01-23 18:16:52.728516714 +0000 UTC m=+955.730974737" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.237379 
4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4cxm9"] Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.238054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.251775 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4cxm9"] Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.364688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xll\" (UniqueName: \"kubernetes.io/projected/572066a4-717d-4bc0-8ef4-146bd33c3768-kube-api-access-p7xll\") pod \"openstack-operator-index-4cxm9\" (UID: \"572066a4-717d-4bc0-8ef4-146bd33c3768\") " pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.466116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7xll\" (UniqueName: \"kubernetes.io/projected/572066a4-717d-4bc0-8ef4-146bd33c3768-kube-api-access-p7xll\") pod \"openstack-operator-index-4cxm9\" (UID: \"572066a4-717d-4bc0-8ef4-146bd33c3768\") " pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.498334 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7xll\" (UniqueName: \"kubernetes.io/projected/572066a4-717d-4bc0-8ef4-146bd33c3768-kube-api-access-p7xll\") pod \"openstack-operator-index-4cxm9\" (UID: \"572066a4-717d-4bc0-8ef4-146bd33c3768\") " pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.588350 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:16:53 crc kubenswrapper[4760]: I0123 18:16:53.718063 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x2qbb" podUID="93fec970-5b48-4016-8439-e6779350b60e" containerName="registry-server" containerID="cri-o://532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56" gracePeriod=2 Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.022001 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4cxm9"] Jan 23 18:16:54 crc kubenswrapper[4760]: W0123 18:16:54.031995 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572066a4_717d_4bc0_8ef4_146bd33c3768.slice/crio-0ab8257e06e4f768f3c8e3c22feb489938f867791f27d17a57902345263633f9 WatchSource:0}: Error finding container 0ab8257e06e4f768f3c8e3c22feb489938f867791f27d17a57902345263633f9: Status 404 returned error can't find the container with id 0ab8257e06e4f768f3c8e3c22feb489938f867791f27d17a57902345263633f9 Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.063359 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.183047 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlqzg\" (UniqueName: \"kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg\") pod \"93fec970-5b48-4016-8439-e6779350b60e\" (UID: \"93fec970-5b48-4016-8439-e6779350b60e\") " Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.188781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg" (OuterVolumeSpecName: "kube-api-access-dlqzg") pod "93fec970-5b48-4016-8439-e6779350b60e" (UID: "93fec970-5b48-4016-8439-e6779350b60e"). InnerVolumeSpecName "kube-api-access-dlqzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.284919 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlqzg\" (UniqueName: \"kubernetes.io/projected/93fec970-5b48-4016-8439-e6779350b60e-kube-api-access-dlqzg\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.495899 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cxwr6" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.726549 4760 generic.go:334] "Generic (PLEG): container finished" podID="93fec970-5b48-4016-8439-e6779350b60e" containerID="532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56" exitCode=0 Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.726595 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2qbb" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.726625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2qbb" event={"ID":"93fec970-5b48-4016-8439-e6779350b60e","Type":"ContainerDied","Data":"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56"} Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.726658 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2qbb" event={"ID":"93fec970-5b48-4016-8439-e6779350b60e","Type":"ContainerDied","Data":"136825c13a8b8c8bd88c88a91428cfdbb032f84e0b517e1d3dc41bdb9f748cb1"} Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.726691 4760 scope.go:117] "RemoveContainer" containerID="532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.728718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4cxm9" event={"ID":"572066a4-717d-4bc0-8ef4-146bd33c3768","Type":"ContainerStarted","Data":"83af3b278f5359fbdb3f8ca050615ec62640a7bc35de44c73bb620e36c48c52a"} Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.728750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4cxm9" event={"ID":"572066a4-717d-4bc0-8ef4-146bd33c3768","Type":"ContainerStarted","Data":"0ab8257e06e4f768f3c8e3c22feb489938f867791f27d17a57902345263633f9"} Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.752831 4760 scope.go:117] "RemoveContainer" containerID="532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.753036 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4cxm9" podStartSLOduration=1.69700008 
podStartE2EDuration="1.753021718s" podCreationTimestamp="2026-01-23 18:16:53 +0000 UTC" firstStartedPulling="2026-01-23 18:16:54.036989446 +0000 UTC m=+957.039447389" lastFinishedPulling="2026-01-23 18:16:54.093011064 +0000 UTC m=+957.095469027" observedRunningTime="2026-01-23 18:16:54.749513879 +0000 UTC m=+957.751971812" watchObservedRunningTime="2026-01-23 18:16:54.753021718 +0000 UTC m=+957.755479651" Jan 23 18:16:54 crc kubenswrapper[4760]: E0123 18:16:54.753654 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56\": container with ID starting with 532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56 not found: ID does not exist" containerID="532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.753704 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56"} err="failed to get container status \"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56\": rpc error: code = NotFound desc = could not find container \"532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56\": container with ID starting with 532ad143254b9eeff6c864698724b4b7aa342797cb4953ca0cf7a4d523b78e56 not found: ID does not exist" Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.767752 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:54 crc kubenswrapper[4760]: I0123 18:16:54.773697 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x2qbb"] Jan 23 18:16:55 crc kubenswrapper[4760]: I0123 18:16:55.602956 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fec970-5b48-4016-8439-e6779350b60e" 
path="/var/lib/kubelet/pods/93fec970-5b48-4016-8439-e6779350b60e/volumes" Jan 23 18:17:03 crc kubenswrapper[4760]: I0123 18:17:03.589473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:17:03 crc kubenswrapper[4760]: I0123 18:17:03.589732 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:17:03 crc kubenswrapper[4760]: I0123 18:17:03.619121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:17:03 crc kubenswrapper[4760]: I0123 18:17:03.824049 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4cxm9" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.477105 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7"] Jan 23 18:17:06 crc kubenswrapper[4760]: E0123 18:17:06.477593 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fec970-5b48-4016-8439-e6779350b60e" containerName="registry-server" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.477608 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fec970-5b48-4016-8439-e6779350b60e" containerName="registry-server" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.477708 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fec970-5b48-4016-8439-e6779350b60e" containerName="registry-server" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.478639 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.483540 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nsm68" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.488417 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7"] Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.610525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.610595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxk9h\" (UniqueName: \"kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.610696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 
18:17:06.712358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.712502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.713217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxk9h\" (UniqueName: \"kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.713377 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.713392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.737207 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxk9h\" (UniqueName: \"kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h\") pod \"c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:06 crc kubenswrapper[4760]: I0123 18:17:06.800141 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:07 crc kubenswrapper[4760]: I0123 18:17:07.210694 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7"] Jan 23 18:17:07 crc kubenswrapper[4760]: I0123 18:17:07.818013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerStarted","Data":"bd21236ae0825bb40af5e0683ca46649c3140970cc709fb36be06cf00de596d6"} Jan 23 18:17:07 crc kubenswrapper[4760]: I0123 18:17:07.818500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerStarted","Data":"9f54f9458f38e8e85d9059efa395a6d752ad091b8df8936b800d689320695299"} Jan 23 18:17:08 crc kubenswrapper[4760]: I0123 18:17:08.826654 4760 
generic.go:334] "Generic (PLEG): container finished" podID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerID="bd21236ae0825bb40af5e0683ca46649c3140970cc709fb36be06cf00de596d6" exitCode=0 Jan 23 18:17:08 crc kubenswrapper[4760]: I0123 18:17:08.826828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerDied","Data":"bd21236ae0825bb40af5e0683ca46649c3140970cc709fb36be06cf00de596d6"} Jan 23 18:17:09 crc kubenswrapper[4760]: I0123 18:17:09.837025 4760 generic.go:334] "Generic (PLEG): container finished" podID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerID="f66da5a0a466abb950a72e70f9c1443cb1a1544f001765880ed3caeed10fac2c" exitCode=0 Jan 23 18:17:09 crc kubenswrapper[4760]: I0123 18:17:09.837127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerDied","Data":"f66da5a0a466abb950a72e70f9c1443cb1a1544f001765880ed3caeed10fac2c"} Jan 23 18:17:10 crc kubenswrapper[4760]: I0123 18:17:10.846519 4760 generic.go:334] "Generic (PLEG): container finished" podID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerID="01e1fe50b7d2127969746f4de01e636e09dd4997fb1648d1da98b06dcec96971" exitCode=0 Jan 23 18:17:10 crc kubenswrapper[4760]: I0123 18:17:10.846564 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerDied","Data":"01e1fe50b7d2127969746f4de01e636e09dd4997fb1648d1da98b06dcec96971"} Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.229654 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.292848 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle\") pod \"f45d762a-3d92-4bd5-8e93-6a940ef50517\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.292972 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util\") pod \"f45d762a-3d92-4bd5-8e93-6a940ef50517\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.293168 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxk9h\" (UniqueName: \"kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h\") pod \"f45d762a-3d92-4bd5-8e93-6a940ef50517\" (UID: \"f45d762a-3d92-4bd5-8e93-6a940ef50517\") " Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.294129 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle" (OuterVolumeSpecName: "bundle") pod "f45d762a-3d92-4bd5-8e93-6a940ef50517" (UID: "f45d762a-3d92-4bd5-8e93-6a940ef50517"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.301399 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h" (OuterVolumeSpecName: "kube-api-access-qxk9h") pod "f45d762a-3d92-4bd5-8e93-6a940ef50517" (UID: "f45d762a-3d92-4bd5-8e93-6a940ef50517"). InnerVolumeSpecName "kube-api-access-qxk9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.312522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util" (OuterVolumeSpecName: "util") pod "f45d762a-3d92-4bd5-8e93-6a940ef50517" (UID: "f45d762a-3d92-4bd5-8e93-6a940ef50517"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.394979 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.395001 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f45d762a-3d92-4bd5-8e93-6a940ef50517-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.395010 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxk9h\" (UniqueName: \"kubernetes.io/projected/f45d762a-3d92-4bd5-8e93-6a940ef50517-kube-api-access-qxk9h\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.865259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" event={"ID":"f45d762a-3d92-4bd5-8e93-6a940ef50517","Type":"ContainerDied","Data":"9f54f9458f38e8e85d9059efa395a6d752ad091b8df8936b800d689320695299"} Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.865545 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f54f9458f38e8e85d9059efa395a6d752ad091b8df8936b800d689320695299" Jan 23 18:17:12 crc kubenswrapper[4760]: I0123 18:17:12.865431 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7" Jan 23 18:17:16 crc kubenswrapper[4760]: I0123 18:17:16.076264 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:16 crc kubenswrapper[4760]: I0123 18:17:16.076787 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.815776 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z"] Jan 23 18:17:18 crc kubenswrapper[4760]: E0123 18:17:18.816249 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="pull" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.816275 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="pull" Jan 23 18:17:18 crc kubenswrapper[4760]: E0123 18:17:18.816309 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="util" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.816326 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="util" Jan 23 18:17:18 crc kubenswrapper[4760]: E0123 18:17:18.816370 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" 
containerName="extract" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.816388 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="extract" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.816701 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45d762a-3d92-4bd5-8e93-6a940ef50517" containerName="extract" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.817581 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.821893 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5xhch" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.863126 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z"] Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.883740 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7fr\" (UniqueName: \"kubernetes.io/projected/e9a6d033-9989-4ea2-a4c1-734f2baa1828-kube-api-access-wm7fr\") pod \"openstack-operator-controller-init-ff567b4f8-wdx4z\" (UID: \"e9a6d033-9989-4ea2-a4c1-734f2baa1828\") " pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:18 crc kubenswrapper[4760]: I0123 18:17:18.984740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7fr\" (UniqueName: \"kubernetes.io/projected/e9a6d033-9989-4ea2-a4c1-734f2baa1828-kube-api-access-wm7fr\") pod \"openstack-operator-controller-init-ff567b4f8-wdx4z\" (UID: \"e9a6d033-9989-4ea2-a4c1-734f2baa1828\") " pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:19 crc kubenswrapper[4760]: I0123 
18:17:19.006084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7fr\" (UniqueName: \"kubernetes.io/projected/e9a6d033-9989-4ea2-a4c1-734f2baa1828-kube-api-access-wm7fr\") pod \"openstack-operator-controller-init-ff567b4f8-wdx4z\" (UID: \"e9a6d033-9989-4ea2-a4c1-734f2baa1828\") " pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:19 crc kubenswrapper[4760]: I0123 18:17:19.155828 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:19 crc kubenswrapper[4760]: I0123 18:17:19.614254 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z"] Jan 23 18:17:19 crc kubenswrapper[4760]: I0123 18:17:19.912510 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" event={"ID":"e9a6d033-9989-4ea2-a4c1-734f2baa1828","Type":"ContainerStarted","Data":"df5625d0f8337d8e1529c14e7443f6712a7758a6b0215db38765c6d59ecc190e"} Jan 23 18:17:24 crc kubenswrapper[4760]: I0123 18:17:24.962853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" event={"ID":"e9a6d033-9989-4ea2-a4c1-734f2baa1828","Type":"ContainerStarted","Data":"505f3f1015041a4b4543422346abc7e09e49d9f10b0b71ddbd5bd30c64a2a0d9"} Jan 23 18:17:24 crc kubenswrapper[4760]: I0123 18:17:24.963326 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:24 crc kubenswrapper[4760]: I0123 18:17:24.989038 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" podStartSLOduration=2.578224084 podStartE2EDuration="6.989022758s" 
podCreationTimestamp="2026-01-23 18:17:18 +0000 UTC" firstStartedPulling="2026-01-23 18:17:19.614957099 +0000 UTC m=+982.617415082" lastFinishedPulling="2026-01-23 18:17:24.025755803 +0000 UTC m=+987.028213756" observedRunningTime="2026-01-23 18:17:24.988074082 +0000 UTC m=+987.990532035" watchObservedRunningTime="2026-01-23 18:17:24.989022758 +0000 UTC m=+987.991480691" Jan 23 18:17:29 crc kubenswrapper[4760]: I0123 18:17:29.160649 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-ff567b4f8-wdx4z" Jan 23 18:17:46 crc kubenswrapper[4760]: I0123 18:17:46.075488 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:46 crc kubenswrapper[4760]: I0123 18:17:46.076053 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.445123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.446392 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.464310 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.504932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.505031 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnx2n\" (UniqueName: \"kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.505063 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.608220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.608283 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dnx2n\" (UniqueName: \"kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.608300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.608811 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.608877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.629873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnx2n\" (UniqueName: \"kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n\") pod \"certified-operators-kcfbv\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.765127 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.935698 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.936885 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.944472 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5m66l" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.952435 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.955984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.959847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mfw7l" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.971668 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.972726 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.976795 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2bw2r" Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.977140 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.985301 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54"] Jan 23 18:17:48 crc kubenswrapper[4760]: I0123 18:17:48.996984 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.000580 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.006664 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vmc6r" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.017331 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.019753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t84lz\" (UniqueName: \"kubernetes.io/projected/3648750a-24fe-4391-8921-66d791485e98-kube-api-access-t84lz\") pod \"barbican-operator-controller-manager-7f86f8796f-5g86q\" (UID: \"3648750a-24fe-4391-8921-66d791485e98\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.019826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwkk\" (UniqueName: \"kubernetes.io/projected/fdb3af86-9ecd-45de-8f76-976ff884b581-kube-api-access-5zwkk\") pod \"cinder-operator-controller-manager-69cf5d4557-cql54\" (UID: \"fdb3af86-9ecd-45de-8f76-976ff884b581\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.019851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrbl\" (UniqueName: \"kubernetes.io/projected/d0989ccd-5163-46a0-b578-975ba1c31f03-kube-api-access-kkrbl\") pod \"designate-operator-controller-manager-b45d7bf98-j9vqr\" (UID: \"d0989ccd-5163-46a0-b578-975ba1c31f03\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 
18:17:49.019982 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74k8d\" (UniqueName: \"kubernetes.io/projected/ef671ec0-a50e-4acd-bd63-31aa36cf3033-kube-api-access-74k8d\") pod \"glance-operator-controller-manager-78fdd796fd-pjz6t\" (UID: \"ef671ec0-a50e-4acd-bd63-31aa36cf3033\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.042653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.069500 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.070492 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.098737 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.105597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w6dp9" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.121212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t84lz\" (UniqueName: \"kubernetes.io/projected/3648750a-24fe-4391-8921-66d791485e98-kube-api-access-t84lz\") pod \"barbican-operator-controller-manager-7f86f8796f-5g86q\" (UID: \"3648750a-24fe-4391-8921-66d791485e98\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.121308 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5zwkk\" (UniqueName: \"kubernetes.io/projected/fdb3af86-9ecd-45de-8f76-976ff884b581-kube-api-access-5zwkk\") pod \"cinder-operator-controller-manager-69cf5d4557-cql54\" (UID: \"fdb3af86-9ecd-45de-8f76-976ff884b581\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.121341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrbl\" (UniqueName: \"kubernetes.io/projected/d0989ccd-5163-46a0-b578-975ba1c31f03-kube-api-access-kkrbl\") pod \"designate-operator-controller-manager-b45d7bf98-j9vqr\" (UID: \"d0989ccd-5163-46a0-b578-975ba1c31f03\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.121386 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74k8d\" (UniqueName: \"kubernetes.io/projected/ef671ec0-a50e-4acd-bd63-31aa36cf3033-kube-api-access-74k8d\") pod \"glance-operator-controller-manager-78fdd796fd-pjz6t\" (UID: \"ef671ec0-a50e-4acd-bd63-31aa36cf3033\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.121454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2zn\" (UniqueName: \"kubernetes.io/projected/0d449643-d693-4591-a0d6-42e8129a3468-kube-api-access-wb2zn\") pod \"heat-operator-controller-manager-594c8c9d5d-ljpl4\" (UID: \"0d449643-d693-4591-a0d6-42e8129a3468\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.126890 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt"] Jan 23 18:17:49 crc kubenswrapper[4760]: 
I0123 18:17:49.127733 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.141099 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t84lz\" (UniqueName: \"kubernetes.io/projected/3648750a-24fe-4391-8921-66d791485e98-kube-api-access-t84lz\") pod \"barbican-operator-controller-manager-7f86f8796f-5g86q\" (UID: \"3648750a-24fe-4391-8921-66d791485e98\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.141657 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-msjmf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.150703 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.153964 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.152906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwkk\" (UniqueName: \"kubernetes.io/projected/fdb3af86-9ecd-45de-8f76-976ff884b581-kube-api-access-5zwkk\") pod \"cinder-operator-controller-manager-69cf5d4557-cql54\" (UID: \"fdb3af86-9ecd-45de-8f76-976ff884b581\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.158891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74k8d\" (UniqueName: \"kubernetes.io/projected/ef671ec0-a50e-4acd-bd63-31aa36cf3033-kube-api-access-74k8d\") pod \"glance-operator-controller-manager-78fdd796fd-pjz6t\" (UID: \"ef671ec0-a50e-4acd-bd63-31aa36cf3033\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.169812 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.170493 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b9skj" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.175603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrbl\" (UniqueName: \"kubernetes.io/projected/d0989ccd-5163-46a0-b578-975ba1c31f03-kube-api-access-kkrbl\") pod \"designate-operator-controller-manager-b45d7bf98-j9vqr\" (UID: \"d0989ccd-5163-46a0-b578-975ba1c31f03\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.175685 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.204161 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.205173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.207889 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-78ld5" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.222853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.222942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2bn\" (UniqueName: \"kubernetes.io/projected/fcc9617c-e7aa-4707-bcaf-1492e3e0fee6-kube-api-access-ss2bn\") pod \"horizon-operator-controller-manager-77d5c5b54f-8w8lt\" (UID: \"fcc9617c-e7aa-4707-bcaf-1492e3e0fee6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.222982 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqsk\" (UniqueName: \"kubernetes.io/projected/f56403a2-dc6e-4362-99c2-669531fd3d8d-kube-api-access-nsqsk\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: 
\"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.223052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2zn\" (UniqueName: \"kubernetes.io/projected/0d449643-d693-4591-a0d6-42e8129a3468-kube-api-access-wb2zn\") pod \"heat-operator-controller-manager-594c8c9d5d-ljpl4\" (UID: \"0d449643-d693-4591-a0d6-42e8129a3468\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.223095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9hz\" (UniqueName: \"kubernetes.io/projected/58df2b6d-bc85-4266-bc2c-143cd52efc28-kube-api-access-7x9hz\") pod \"ironic-operator-controller-manager-598f7747c9-vvcd8\" (UID: \"58df2b6d-bc85-4266-bc2c-143cd52efc28\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.225054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.239792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.248789 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.249551 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.252835 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-c45hq" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.267000 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.269950 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.270336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2zn\" (UniqueName: \"kubernetes.io/projected/0d449643-d693-4591-a0d6-42e8129a3468-kube-api-access-wb2zn\") pod \"heat-operator-controller-manager-594c8c9d5d-ljpl4\" (UID: \"0d449643-d693-4591-a0d6-42e8129a3468\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.270990 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.289401 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.305432 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.323834 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-blf64" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2bn\" (UniqueName: \"kubernetes.io/projected/fcc9617c-e7aa-4707-bcaf-1492e3e0fee6-kube-api-access-ss2bn\") pod \"horizon-operator-controller-manager-77d5c5b54f-8w8lt\" (UID: \"fcc9617c-e7aa-4707-bcaf-1492e3e0fee6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324337 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqsk\" (UniqueName: \"kubernetes.io/projected/f56403a2-dc6e-4362-99c2-669531fd3d8d-kube-api-access-nsqsk\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz824\" (UniqueName: \"kubernetes.io/projected/4d84645c-b378-4acd-a3e5-638c61a3b709-kube-api-access-pz824\") pod \"keystone-operator-controller-manager-b8b6d4659-qf78h\" 
(UID: \"4d84645c-b378-4acd-a3e5-638c61a3b709\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324460 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzk9h\" (UniqueName: \"kubernetes.io/projected/8a1115aa-5fc1-4dc1-8752-7d15f984837b-kube-api-access-zzk9h\") pod \"manila-operator-controller-manager-7758cc4469-bczdt\" (UID: \"8a1115aa-5fc1-4dc1-8752-7d15f984837b\") " pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9hz\" (UniqueName: \"kubernetes.io/projected/58df2b6d-bc85-4266-bc2c-143cd52efc28-kube-api-access-7x9hz\") pod \"ironic-operator-controller-manager-598f7747c9-vvcd8\" (UID: \"58df2b6d-bc85-4266-bc2c-143cd52efc28\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.324994 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.325493 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.325547 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert podName:f56403a2-dc6e-4362-99c2-669531fd3d8d nodeName:}" failed. No retries permitted until 2026-01-23 18:17:49.825528502 +0000 UTC m=+1012.827986435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert") pod "infra-operator-controller-manager-58749ffdfb-cxqww" (UID: "f56403a2-dc6e-4362-99c2-669531fd3d8d") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.325782 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.333247 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.334028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.337016 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7rp6x" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.346826 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.348990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.362950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9hz\" (UniqueName: \"kubernetes.io/projected/58df2b6d-bc85-4266-bc2c-143cd52efc28-kube-api-access-7x9hz\") pod \"ironic-operator-controller-manager-598f7747c9-vvcd8\" (UID: \"58df2b6d-bc85-4266-bc2c-143cd52efc28\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.363471 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.364440 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.369903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bcw6c" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.382177 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.384528 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqsk\" (UniqueName: \"kubernetes.io/projected/f56403a2-dc6e-4362-99c2-669531fd3d8d-kube-api-access-nsqsk\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.386466 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.387710 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.395180 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2bn\" (UniqueName: \"kubernetes.io/projected/fcc9617c-e7aa-4707-bcaf-1492e3e0fee6-kube-api-access-ss2bn\") pod \"horizon-operator-controller-manager-77d5c5b54f-8w8lt\" (UID: \"fcc9617c-e7aa-4707-bcaf-1492e3e0fee6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.399723 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.422934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bkm6f" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.441486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9m99\" (UniqueName: \"kubernetes.io/projected/b96abc36-760b-4dfb-bc01-80872c59c059-kube-api-access-f9m99\") pod \"neutron-operator-controller-manager-78d58447c5-jpt62\" (UID: \"b96abc36-760b-4dfb-bc01-80872c59c059\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.441867 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz824\" (UniqueName: \"kubernetes.io/projected/4d84645c-b378-4acd-a3e5-638c61a3b709-kube-api-access-pz824\") pod \"keystone-operator-controller-manager-b8b6d4659-qf78h\" (UID: \"4d84645c-b378-4acd-a3e5-638c61a3b709\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.442029 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk96l\" (UniqueName: \"kubernetes.io/projected/285e41c1-c4f8-4978-9a78-ca8d88b45f29-kube-api-access-bk96l\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf\" (UID: \"285e41c1-c4f8-4978-9a78-ca8d88b45f29\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.442213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzk9h\" (UniqueName: \"kubernetes.io/projected/8a1115aa-5fc1-4dc1-8752-7d15f984837b-kube-api-access-zzk9h\") pod \"manila-operator-controller-manager-7758cc4469-bczdt\" (UID: \"8a1115aa-5fc1-4dc1-8752-7d15f984837b\") " pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.442467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fsd2\" (UniqueName: \"kubernetes.io/projected/587a0d90-d644-4501-bc83-ef454dc4b3d9-kube-api-access-9fsd2\") pod \"nova-operator-controller-manager-6b8bc8d87d-jq9l4\" (UID: \"587a0d90-d644-4501-bc83-ef454dc4b3d9\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.447436 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.484615 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.544394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.548668 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.551072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fsd2\" (UniqueName: \"kubernetes.io/projected/587a0d90-d644-4501-bc83-ef454dc4b3d9-kube-api-access-9fsd2\") pod \"nova-operator-controller-manager-6b8bc8d87d-jq9l4\" (UID: \"587a0d90-d644-4501-bc83-ef454dc4b3d9\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.551153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9m99\" (UniqueName: \"kubernetes.io/projected/b96abc36-760b-4dfb-bc01-80872c59c059-kube-api-access-f9m99\") pod \"neutron-operator-controller-manager-78d58447c5-jpt62\" (UID: \"b96abc36-760b-4dfb-bc01-80872c59c059\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.551198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk96l\" (UniqueName: \"kubernetes.io/projected/285e41c1-c4f8-4978-9a78-ca8d88b45f29-kube-api-access-bk96l\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf\" (UID: 
\"285e41c1-c4f8-4978-9a78-ca8d88b45f29\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.551942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.551968 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bdpz5" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.552359 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.557085 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.558136 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.581662 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.583926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jcf8j" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.595038 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk96l\" (UniqueName: \"kubernetes.io/projected/285e41c1-c4f8-4978-9a78-ca8d88b45f29-kube-api-access-bk96l\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf\" (UID: \"285e41c1-c4f8-4978-9a78-ca8d88b45f29\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.623147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz824\" (UniqueName: \"kubernetes.io/projected/4d84645c-b378-4acd-a3e5-638c61a3b709-kube-api-access-pz824\") pod \"keystone-operator-controller-manager-b8b6d4659-qf78h\" (UID: \"4d84645c-b378-4acd-a3e5-638c61a3b709\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.623284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzk9h\" (UniqueName: \"kubernetes.io/projected/8a1115aa-5fc1-4dc1-8752-7d15f984837b-kube-api-access-zzk9h\") pod \"manila-operator-controller-manager-7758cc4469-bczdt\" (UID: \"8a1115aa-5fc1-4dc1-8752-7d15f984837b\") " pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.625373 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f9m99\" (UniqueName: \"kubernetes.io/projected/b96abc36-760b-4dfb-bc01-80872c59c059-kube-api-access-f9m99\") pod \"neutron-operator-controller-manager-78d58447c5-jpt62\" (UID: \"b96abc36-760b-4dfb-bc01-80872c59c059\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.630129 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fsd2\" (UniqueName: \"kubernetes.io/projected/587a0d90-d644-4501-bc83-ef454dc4b3d9-kube-api-access-9fsd2\") pod \"nova-operator-controller-manager-6b8bc8d87d-jq9l4\" (UID: \"587a0d90-d644-4501-bc83-ef454dc4b3d9\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.654742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmz2\" (UniqueName: \"kubernetes.io/projected/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-kube-api-access-qhmz2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.654803 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.654860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm58l\" (UniqueName: 
\"kubernetes.io/projected/89d52854-e7b7-4eba-b990-49a971674ab5-kube-api-access-fm58l\") pod \"octavia-operator-controller-manager-7bd9774b6-ngqsw\" (UID: \"89d52854-e7b7-4eba-b990-49a971674ab5\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.655428 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.685494 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.686321 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.699761 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.715909 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.715958 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8bzdj" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.716566 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.717962 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.724738 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9chtp" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.760003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm58l\" (UniqueName: \"kubernetes.io/projected/89d52854-e7b7-4eba-b990-49a971674ab5-kube-api-access-fm58l\") pod \"octavia-operator-controller-manager-7bd9774b6-ngqsw\" (UID: \"89d52854-e7b7-4eba-b990-49a971674ab5\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.760075 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccpn\" (UniqueName: \"kubernetes.io/projected/0dab320d-061f-43f2-9e57-1c94b958522a-kube-api-access-lccpn\") pod \"placement-operator-controller-manager-5d646b7d76-lzv66\" (UID: \"0dab320d-061f-43f2-9e57-1c94b958522a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.760096 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7th9\" (UniqueName: \"kubernetes.io/projected/bb6317fe-84f6-4921-9286-6b1aadd6d038-kube-api-access-f7th9\") pod \"ovn-operator-controller-manager-55db956ddc-tbdng\" (UID: \"bb6317fe-84f6-4921-9286-6b1aadd6d038\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.760139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmz2\" (UniqueName: \"kubernetes.io/projected/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-kube-api-access-qhmz2\") 
pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.760177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.760285 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.760324 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert podName:ed6619a3-ea05-44ae-880e-c9ba87fb93f9 nodeName:}" failed. No retries permitted until 2026-01-23 18:17:50.260310555 +0000 UTC m=+1013.262768488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" (UID: "ed6619a3-ea05-44ae-880e-c9ba87fb93f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.770239 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.799949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmz2\" (UniqueName: \"kubernetes.io/projected/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-kube-api-access-qhmz2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.800112 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.807908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm58l\" (UniqueName: \"kubernetes.io/projected/89d52854-e7b7-4eba-b990-49a971674ab5-kube-api-access-fm58l\") pod \"octavia-operator-controller-manager-7bd9774b6-ngqsw\" (UID: \"89d52854-e7b7-4eba-b990-49a971674ab5\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.833500 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.834680 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.838375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-95h7x" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.852889 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.861072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccpn\" (UniqueName: \"kubernetes.io/projected/0dab320d-061f-43f2-9e57-1c94b958522a-kube-api-access-lccpn\") pod \"placement-operator-controller-manager-5d646b7d76-lzv66\" (UID: \"0dab320d-061f-43f2-9e57-1c94b958522a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.861114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7th9\" (UniqueName: \"kubernetes.io/projected/bb6317fe-84f6-4921-9286-6b1aadd6d038-kube-api-access-f7th9\") pod \"ovn-operator-controller-manager-55db956ddc-tbdng\" (UID: \"bb6317fe-84f6-4921-9286-6b1aadd6d038\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.861146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.861167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-2gpc8\" (UniqueName: \"kubernetes.io/projected/88c1fb15-33fa-40cf-afa9-068d281bbed5-kube-api-access-2gpc8\") pod \"swift-operator-controller-manager-547cbdb99f-kmclp\" (UID: \"88c1fb15-33fa-40cf-afa9-068d281bbed5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.861475 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: E0123 18:17:49.861544 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert podName:f56403a2-dc6e-4362-99c2-669531fd3d8d nodeName:}" failed. No retries permitted until 2026-01-23 18:17:50.861526713 +0000 UTC m=+1013.863984656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert") pod "infra-operator-controller-manager-58749ffdfb-cxqww" (UID: "f56403a2-dc6e-4362-99c2-669531fd3d8d") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.861956 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.874098 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.874952 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.877718 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.888320 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7mghc" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.898728 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.899230 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.899735 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.901543 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jmzfm" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.914721 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.915131 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.921126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccpn\" (UniqueName: \"kubernetes.io/projected/0dab320d-061f-43f2-9e57-1c94b958522a-kube-api-access-lccpn\") pod \"placement-operator-controller-manager-5d646b7d76-lzv66\" (UID: \"0dab320d-061f-43f2-9e57-1c94b958522a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.924168 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.925252 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7th9\" (UniqueName: \"kubernetes.io/projected/bb6317fe-84f6-4921-9286-6b1aadd6d038-kube-api-access-f7th9\") pod \"ovn-operator-controller-manager-55db956ddc-tbdng\" (UID: \"bb6317fe-84f6-4921-9286-6b1aadd6d038\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.942374 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.960172 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws"] Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.961094 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.962125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qwr6\" (UniqueName: \"kubernetes.io/projected/9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0-kube-api-access-8qwr6\") pod \"test-operator-controller-manager-69797bbcbd-7bc5b\" (UID: \"9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.962213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpc8\" (UniqueName: \"kubernetes.io/projected/88c1fb15-33fa-40cf-afa9-068d281bbed5-kube-api-access-2gpc8\") pod \"swift-operator-controller-manager-547cbdb99f-kmclp\" (UID: \"88c1fb15-33fa-40cf-afa9-068d281bbed5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.962245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfwzc\" (UniqueName: \"kubernetes.io/projected/78a244f9-feb4-4df5-b5ec-7bb09185e655-kube-api-access-lfwzc\") pod \"telemetry-operator-controller-manager-85cd9769bb-nxws7\" (UID: \"78a244f9-feb4-4df5-b5ec-7bb09185e655\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.969228 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7sbbl" Jan 23 18:17:49 crc kubenswrapper[4760]: I0123 18:17:49.990893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpc8\" (UniqueName: \"kubernetes.io/projected/88c1fb15-33fa-40cf-afa9-068d281bbed5-kube-api-access-2gpc8\") pod 
\"swift-operator-controller-manager-547cbdb99f-kmclp\" (UID: \"88c1fb15-33fa-40cf-afa9-068d281bbed5\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.001492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.080985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qwr6\" (UniqueName: \"kubernetes.io/projected/9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0-kube-api-access-8qwr6\") pod \"test-operator-controller-manager-69797bbcbd-7bc5b\" (UID: \"9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.081099 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfwzc\" (UniqueName: \"kubernetes.io/projected/78a244f9-feb4-4df5-b5ec-7bb09185e655-kube-api-access-lfwzc\") pod \"telemetry-operator-controller-manager-85cd9769bb-nxws7\" (UID: \"78a244f9-feb4-4df5-b5ec-7bb09185e655\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.087271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.092329 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.111105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qwr6\" (UniqueName: \"kubernetes.io/projected/9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0-kube-api-access-8qwr6\") pod \"test-operator-controller-manager-69797bbcbd-7bc5b\" (UID: \"9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.120132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfwzc\" (UniqueName: \"kubernetes.io/projected/78a244f9-feb4-4df5-b5ec-7bb09185e655-kube-api-access-lfwzc\") pod \"telemetry-operator-controller-manager-85cd9769bb-nxws7\" (UID: \"78a244f9-feb4-4df5-b5ec-7bb09185e655\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.126320 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.127190 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.132014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fqhsc" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.132059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.132234 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.137961 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.152543 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.153964 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.162399 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7mqpg" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.165623 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.165660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerDied","Data":"fd342a71df3647893287d17cf5ce63949b1b43765cea725a57b4f7131ec55ab1"} Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.165543 4760 generic.go:334] "Generic (PLEG): container finished" podID="141d7aff-3923-4d3a-84bf-6682f127c277" containerID="fd342a71df3647893287d17cf5ce63949b1b43765cea725a57b4f7131ec55ab1" exitCode=0 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.167532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerStarted","Data":"120d40a3229ff584b8474ef3dfc565731bce6d03ff1ee984971f47a2ed4792ae"} Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.184039 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9mq7\" (UniqueName: \"kubernetes.io/projected/807135e7-1ace-4928-be9b-82b8a58464fe-kube-api-access-z9mq7\") pod \"watcher-operator-controller-manager-6d9458688d-r2pws\" (UID: \"807135e7-1ace-4928-be9b-82b8a58464fe\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.285762 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.286038 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.286070 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.286095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnmc\" (UniqueName: \"kubernetes.io/projected/f5c3fafa-733d-4719-89f5-afd3c885919e-kube-api-access-lgnmc\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.286146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgkb\" (UniqueName: 
\"kubernetes.io/projected/ec704d93-0ca4-4d63-a123-dbb5a62bffed-kube-api-access-qxgkb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j24jt\" (UID: \"ec704d93-0ca4-4d63-a123-dbb5a62bffed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.286220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9mq7\" (UniqueName: \"kubernetes.io/projected/807135e7-1ace-4928-be9b-82b8a58464fe-kube-api-access-z9mq7\") pod \"watcher-operator-controller-manager-6d9458688d-r2pws\" (UID: \"807135e7-1ace-4928-be9b-82b8a58464fe\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.286749 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.286796 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert podName:ed6619a3-ea05-44ae-880e-c9ba87fb93f9 nodeName:}" failed. No retries permitted until 2026-01-23 18:17:51.286778732 +0000 UTC m=+1014.289236675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" (UID: "ed6619a3-ea05-44ae-880e-c9ba87fb93f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.287123 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.311208 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9mq7\" (UniqueName: \"kubernetes.io/projected/807135e7-1ace-4928-be9b-82b8a58464fe-kube-api-access-z9mq7\") pod \"watcher-operator-controller-manager-6d9458688d-r2pws\" (UID: \"807135e7-1ace-4928-be9b-82b8a58464fe\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.312448 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.331468 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.338021 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3648750a_24fe_4391_8921_66d791485e98.slice/crio-0b275aeca8f17569bad0065b88944042e07fde139c90a74c797a10a7545d1302 WatchSource:0}: Error finding container 0b275aeca8f17569bad0065b88944042e07fde139c90a74c797a10a7545d1302: Status 404 returned error can't find the container with id 0b275aeca8f17569bad0065b88944042e07fde139c90a74c797a10a7545d1302 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.383729 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.388227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgkb\" (UniqueName: \"kubernetes.io/projected/ec704d93-0ca4-4d63-a123-dbb5a62bffed-kube-api-access-qxgkb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j24jt\" (UID: \"ec704d93-0ca4-4d63-a123-dbb5a62bffed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.388388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.388464 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.388497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnmc\" (UniqueName: \"kubernetes.io/projected/f5c3fafa-733d-4719-89f5-afd3c885919e-kube-api-access-lgnmc\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.388617 4760 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.388626 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.388689 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:50.888669528 +0000 UTC m=+1013.891127461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.388733 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:50.88872037 +0000 UTC m=+1013.891178303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.410122 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgkb\" (UniqueName: \"kubernetes.io/projected/ec704d93-0ca4-4d63-a123-dbb5a62bffed-kube-api-access-qxgkb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j24jt\" (UID: \"ec704d93-0ca4-4d63-a123-dbb5a62bffed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.410358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnmc\" (UniqueName: \"kubernetes.io/projected/f5c3fafa-733d-4719-89f5-afd3c885919e-kube-api-access-lgnmc\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.421142 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.549788 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.568918 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.571521 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef671ec0_a50e_4acd_bd63_31aa36cf3033.slice/crio-60f68dff41284515519928f5a34d4e853c173d87470e68e474ae17d1f9ebf550 WatchSource:0}: Error finding container 60f68dff41284515519928f5a34d4e853c173d87470e68e474ae17d1f9ebf550: Status 404 returned error can't find the container with id 60f68dff41284515519928f5a34d4e853c173d87470e68e474ae17d1f9ebf550 Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.572238 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0989ccd_5163_46a0_b578_975ba1c31f03.slice/crio-d4d12e97177ffa1ec25abff0855212fa376da9187dd11a8b33e2e72d49e3b382 WatchSource:0}: Error finding container d4d12e97177ffa1ec25abff0855212fa376da9187dd11a8b33e2e72d49e3b382: Status 404 returned error can't find the container with id d4d12e97177ffa1ec25abff0855212fa376da9187dd11a8b33e2e72d49e3b382 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.576497 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.583203 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.584839 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb3af86_9ecd_45de_8f76_976ff884b581.slice/crio-6b9dae9fa83e4984a8c66f3ed74d9429ef73c568cbc109cda803048aba5153b5 WatchSource:0}: Error finding container 6b9dae9fa83e4984a8c66f3ed74d9429ef73c568cbc109cda803048aba5153b5: Status 404 returned error can't find the container with id 6b9dae9fa83e4984a8c66f3ed74d9429ef73c568cbc109cda803048aba5153b5 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.708960 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.716609 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58df2b6d_bc85_4266_bc2c_143cd52efc28.slice/crio-288305ca2feaba3e034058eeac04405dd691e821d5925c43042b92a890fc92dc WatchSource:0}: Error finding container 288305ca2feaba3e034058eeac04405dd691e821d5925c43042b92a890fc92dc: Status 404 returned error can't find the container with id 288305ca2feaba3e034058eeac04405dd691e821d5925c43042b92a890fc92dc Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.717709 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.726449 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.727698 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d449643_d693_4591_a0d6_42e8129a3468.slice/crio-5ca50ca3683d4e380dc45dfa5892ff67a15e86bde1d126f75c561d089592c1da WatchSource:0}: Error finding container 5ca50ca3683d4e380dc45dfa5892ff67a15e86bde1d126f75c561d089592c1da: Status 404 returned error can't find the 
container with id 5ca50ca3683d4e380dc45dfa5892ff67a15e86bde1d126f75c561d089592c1da Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.735166 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.750492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.759110 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dab320d_061f_43f2_9e57_1c94b958522a.slice/crio-a49bc02b56faa0ef095c9a10389823f8d73ab77db2b4de63809b0eb728dfe0c1 WatchSource:0}: Error finding container a49bc02b56faa0ef095c9a10389823f8d73ab77db2b4de63809b0eb728dfe0c1: Status 404 returned error can't find the container with id a49bc02b56faa0ef095c9a10389823f8d73ab77db2b4de63809b0eb728dfe0c1 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.770915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.882392 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.887532 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.891357 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod587a0d90_d644_4501_bc83_ef454dc4b3d9.slice/crio-92db71b269ac7dd2537e56ba6199fbe8ee8c4deb2669ac99a13ee5fedc74e0a2 WatchSource:0}: Error finding container 
92db71b269ac7dd2537e56ba6199fbe8ee8c4deb2669ac99a13ee5fedc74e0a2: Status 404 returned error can't find the container with id 92db71b269ac7dd2537e56ba6199fbe8ee8c4deb2669ac99a13ee5fedc74e0a2 Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.894494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.905660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.905747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.905775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.905937 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.905990 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:51.905972251 +0000 UTC m=+1014.908430184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.906037 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.906066 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert podName:f56403a2-dc6e-4362-99c2-669531fd3d8d nodeName:}" failed. No retries permitted until 2026-01-23 18:17:52.906058973 +0000 UTC m=+1015.908516906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert") pod "infra-operator-controller-manager-58749ffdfb-cxqww" (UID: "f56403a2-dc6e-4362-99c2-669531fd3d8d") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.906110 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.906136 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:51.906127365 +0000 UTC m=+1014.908585298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.909663 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-ngqsw_openstack-operators(89d52854-e7b7-4eba-b990-49a971674ab5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.911561 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podUID="89d52854-e7b7-4eba-b990-49a971674ab5" Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.916668 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng"] Jan 23 18:17:50 crc kubenswrapper[4760]: I0123 18:17:50.921980 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf"] Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.922340 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6317fe_84f6_4921_9286_6b1aadd6d038.slice/crio-fc68cf19d530f6e31ee67aa976f7c678ad33b184929f1ace4076ce8ac1037a84 WatchSource:0}: Error finding container fc68cf19d530f6e31ee67aa976f7c678ad33b184929f1ace4076ce8ac1037a84: Status 404 returned error can't find the container with id fc68cf19d530f6e31ee67aa976f7c678ad33b184929f1ace4076ce8ac1037a84 Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.925208 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7th9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-tbdng_openstack-operators(bb6317fe-84f6-4921-9286-6b1aadd6d038): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.926304 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" podUID="bb6317fe-84f6-4921-9286-6b1aadd6d038" Jan 23 18:17:50 crc kubenswrapper[4760]: W0123 18:17:50.926909 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod285e41c1_c4f8_4978_9a78_ca8d88b45f29.slice/crio-762e9adcd940046366c0cd28758fa1ede8c896d4544f130d3500b4cc8bc1b3a2 WatchSource:0}: Error finding container 762e9adcd940046366c0cd28758fa1ede8c896d4544f130d3500b4cc8bc1b3a2: Status 404 returned error can't find the container with id 762e9adcd940046366c0cd28758fa1ede8c896d4544f130d3500b4cc8bc1b3a2 Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.929471 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bk96l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf_openstack-operators(285e41c1-c4f8-4978-9a78-ca8d88b45f29): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:50 crc kubenswrapper[4760]: E0123 18:17:50.930999 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" podUID="285e41c1-c4f8-4978-9a78-ca8d88b45f29" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.024723 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp"] Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.032374 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b"] Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.040015 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7"] Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.046492 4760 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws"] Jan 23 18:17:51 crc kubenswrapper[4760]: W0123 18:17:51.049134 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d6049ab_b6ff_41e2_8e37_f3c2102d5ab0.slice/crio-728d59ef79f53fc93f373df8331e41006cd71d2bbcea60470afb71622244708f WatchSource:0}: Error finding container 728d59ef79f53fc93f373df8331e41006cd71d2bbcea60470afb71622244708f: Status 404 returned error can't find the container with id 728d59ef79f53fc93f373df8331e41006cd71d2bbcea60470afb71622244708f Jan 23 18:17:51 crc kubenswrapper[4760]: W0123 18:17:51.051489 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1fb15_33fa_40cf_afa9_068d281bbed5.slice/crio-f8818bc928b67c3f62bd579935c54ec7229d443d854f1873afa9035962221c8a WatchSource:0}: Error finding container f8818bc928b67c3f62bd579935c54ec7229d443d854f1873afa9035962221c8a: Status 404 returned error can't find the container with id f8818bc928b67c3f62bd579935c54ec7229d443d854f1873afa9035962221c8a Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.051869 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt"] Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.052865 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gpc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-kmclp_openstack-operators(88c1fb15-33fa-40cf-afa9-068d281bbed5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.053630 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfwzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-nxws7_openstack-operators(78a244f9-feb4-4df5-b5ec-7bb09185e655): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.054710 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" podUID="88c1fb15-33fa-40cf-afa9-068d281bbed5" Jan 23 18:17:51 crc 
kubenswrapper[4760]: E0123 18:17:51.054747 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podUID="78a244f9-feb4-4df5-b5ec-7bb09185e655" Jan 23 18:17:51 crc kubenswrapper[4760]: W0123 18:17:51.055598 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod807135e7_1ace_4928_be9b_82b8a58464fe.slice/crio-783e5850c1970372fc681a10e713b8ac2432244cf91587e479180c4295ac1c44 WatchSource:0}: Error finding container 783e5850c1970372fc681a10e713b8ac2432244cf91587e479180c4295ac1c44: Status 404 returned error can't find the container with id 783e5850c1970372fc681a10e713b8ac2432244cf91587e479180c4295ac1c44 Jan 23 18:17:51 crc kubenswrapper[4760]: W0123 18:17:51.057922 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec704d93_0ca4_4d63_a123_dbb5a62bffed.slice/crio-5f686e408334bb6dc600f464ca545912f4962b3bec006dc62a5e58ab961b8082 WatchSource:0}: Error finding container 5f686e408334bb6dc600f464ca545912f4962b3bec006dc62a5e58ab961b8082: Status 404 returned error can't find the container with id 5f686e408334bb6dc600f464ca545912f4962b3bec006dc62a5e58ab961b8082 Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.062253 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxgkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j24jt_openstack-operators(ec704d93-0ca4-4d63-a123-dbb5a62bffed): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.062641 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9mq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-r2pws_openstack-operators(807135e7-1ace-4928-be9b-82b8a58464fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.063750 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" podUID="807135e7-1ace-4928-be9b-82b8a58464fe" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.063767 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podUID="ec704d93-0ca4-4d63-a123-dbb5a62bffed" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.175114 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" event={"ID":"ef671ec0-a50e-4acd-bd63-31aa36cf3033","Type":"ContainerStarted","Data":"60f68dff41284515519928f5a34d4e853c173d87470e68e474ae17d1f9ebf550"} Jan 
23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.177926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" event={"ID":"4d84645c-b378-4acd-a3e5-638c61a3b709","Type":"ContainerStarted","Data":"ad9c4a969b3c08bdf3fe1b1a8670a64c348cdf4cd651bdfe39fc839c101c421b"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.179446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" event={"ID":"bb6317fe-84f6-4921-9286-6b1aadd6d038","Type":"ContainerStarted","Data":"fc68cf19d530f6e31ee67aa976f7c678ad33b184929f1ace4076ce8ac1037a84"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.181209 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" podUID="bb6317fe-84f6-4921-9286-6b1aadd6d038" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.183009 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" event={"ID":"0dab320d-061f-43f2-9e57-1c94b958522a","Type":"ContainerStarted","Data":"a49bc02b56faa0ef095c9a10389823f8d73ab77db2b4de63809b0eb728dfe0c1"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.185313 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" event={"ID":"0d449643-d693-4591-a0d6-42e8129a3468","Type":"ContainerStarted","Data":"5ca50ca3683d4e380dc45dfa5892ff67a15e86bde1d126f75c561d089592c1da"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.187728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" event={"ID":"b96abc36-760b-4dfb-bc01-80872c59c059","Type":"ContainerStarted","Data":"eae454afddaeed2a78091c613cc581ab116e1388eec34152df13689952dcdc51"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.189318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" event={"ID":"d0989ccd-5163-46a0-b578-975ba1c31f03","Type":"ContainerStarted","Data":"d4d12e97177ffa1ec25abff0855212fa376da9187dd11a8b33e2e72d49e3b382"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.192909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" event={"ID":"78a244f9-feb4-4df5-b5ec-7bb09185e655","Type":"ContainerStarted","Data":"3f4f6e76b9f0f64e28090cbd80e76f2ef9fb84b88707a9e8852658d1e6713b57"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.198473 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podUID="78a244f9-feb4-4df5-b5ec-7bb09185e655" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.199650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" event={"ID":"ec704d93-0ca4-4d63-a123-dbb5a62bffed","Type":"ContainerStarted","Data":"5f686e408334bb6dc600f464ca545912f4962b3bec006dc62a5e58ab961b8082"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.203113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podUID="ec704d93-0ca4-4d63-a123-dbb5a62bffed" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.203166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" event={"ID":"285e41c1-c4f8-4978-9a78-ca8d88b45f29","Type":"ContainerStarted","Data":"762e9adcd940046366c0cd28758fa1ede8c896d4544f130d3500b4cc8bc1b3a2"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.210246 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" podUID="285e41c1-c4f8-4978-9a78-ca8d88b45f29" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.211876 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" event={"ID":"9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0","Type":"ContainerStarted","Data":"728d59ef79f53fc93f373df8331e41006cd71d2bbcea60470afb71622244708f"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.219918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" event={"ID":"807135e7-1ace-4928-be9b-82b8a58464fe","Type":"ContainerStarted","Data":"783e5850c1970372fc681a10e713b8ac2432244cf91587e479180c4295ac1c44"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.226836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" 
event={"ID":"587a0d90-d644-4501-bc83-ef454dc4b3d9","Type":"ContainerStarted","Data":"92db71b269ac7dd2537e56ba6199fbe8ee8c4deb2669ac99a13ee5fedc74e0a2"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.227013 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" podUID="807135e7-1ace-4928-be9b-82b8a58464fe" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.233985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" event={"ID":"fdb3af86-9ecd-45de-8f76-976ff884b581","Type":"ContainerStarted","Data":"6b9dae9fa83e4984a8c66f3ed74d9429ef73c568cbc109cda803048aba5153b5"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.237231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" event={"ID":"58df2b6d-bc85-4266-bc2c-143cd52efc28","Type":"ContainerStarted","Data":"288305ca2feaba3e034058eeac04405dd691e821d5925c43042b92a890fc92dc"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.246958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" event={"ID":"88c1fb15-33fa-40cf-afa9-068d281bbed5","Type":"ContainerStarted","Data":"f8818bc928b67c3f62bd579935c54ec7229d443d854f1873afa9035962221c8a"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.248375 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" podUID="88c1fb15-33fa-40cf-afa9-068d281bbed5" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.249126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" event={"ID":"3648750a-24fe-4391-8921-66d791485e98","Type":"ContainerStarted","Data":"0b275aeca8f17569bad0065b88944042e07fde139c90a74c797a10a7545d1302"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.253344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" event={"ID":"fcc9617c-e7aa-4707-bcaf-1492e3e0fee6","Type":"ContainerStarted","Data":"d3cafd8ea6f0ee388eaa893dd022eeeb566f65970924bd591ed63d5aefe3258f"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.258684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" event={"ID":"8a1115aa-5fc1-4dc1-8752-7d15f984837b","Type":"ContainerStarted","Data":"02645d2a60280939a4b879edb78ea3d82ba9a6b333c3c6de6bbd86b5dab7f769"} Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.269705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" event={"ID":"89d52854-e7b7-4eba-b990-49a971674ab5","Type":"ContainerStarted","Data":"e72b549e8a8754b0443fd3a19dc041fb3f156e36359223b005598a36f2c14908"} Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.280376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podUID="89d52854-e7b7-4eba-b990-49a971674ab5" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.313199 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.314036 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.314553 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert podName:ed6619a3-ea05-44ae-880e-c9ba87fb93f9 nodeName:}" failed. No retries permitted until 2026-01-23 18:17:53.314535325 +0000 UTC m=+1016.316993258 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" (UID: "ed6619a3-ea05-44ae-880e-c9ba87fb93f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.923071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:51 crc kubenswrapper[4760]: I0123 18:17:51.923149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.923997 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.924064 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:53.924041826 +0000 UTC m=+1016.926499759 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.926346 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:17:51 crc kubenswrapper[4760]: E0123 18:17:51.931434 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:53.931294938 +0000 UTC m=+1016.933752871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.281237 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" podUID="88c1fb15-33fa-40cf-afa9-068d281bbed5" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.282248 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" podUID="bb6317fe-84f6-4921-9286-6b1aadd6d038" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.282317 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podUID="89d52854-e7b7-4eba-b990-49a971674ab5" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.286172 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" podUID="285e41c1-c4f8-4978-9a78-ca8d88b45f29" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.286631 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podUID="ec704d93-0ca4-4d63-a123-dbb5a62bffed" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.286700 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" 
podUID="807135e7-1ace-4928-be9b-82b8a58464fe" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.287285 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podUID="78a244f9-feb4-4df5-b5ec-7bb09185e655" Jan 23 18:17:52 crc kubenswrapper[4760]: I0123 18:17:52.940538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.940731 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:52 crc kubenswrapper[4760]: E0123 18:17:52.940787 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert podName:f56403a2-dc6e-4362-99c2-669531fd3d8d nodeName:}" failed. No retries permitted until 2026-01-23 18:17:56.940771234 +0000 UTC m=+1019.943229167 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert") pod "infra-operator-controller-manager-58749ffdfb-cxqww" (UID: "f56403a2-dc6e-4362-99c2-669531fd3d8d") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: I0123 18:17:53.345446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.345687 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.345755 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert podName:ed6619a3-ea05-44ae-880e-c9ba87fb93f9 nodeName:}" failed. No retries permitted until 2026-01-23 18:17:57.34573179 +0000 UTC m=+1020.348189733 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" (UID: "ed6619a3-ea05-44ae-880e-c9ba87fb93f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: I0123 18:17:53.953210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:53 crc kubenswrapper[4760]: I0123 18:17:53.953521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.953702 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.953763 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:57.953745459 +0000 UTC m=+1020.956203392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.953795 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:17:53 crc kubenswrapper[4760]: E0123 18:17:53.953897 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:17:57.953868732 +0000 UTC m=+1020.956326665 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:17:57 crc kubenswrapper[4760]: I0123 18:17:57.001839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:17:57 crc kubenswrapper[4760]: E0123 18:17:57.001990 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:57 crc kubenswrapper[4760]: E0123 18:17:57.002461 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert 
podName:f56403a2-dc6e-4362-99c2-669531fd3d8d nodeName:}" failed. No retries permitted until 2026-01-23 18:18:05.002434472 +0000 UTC m=+1028.004892445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert") pod "infra-operator-controller-manager-58749ffdfb-cxqww" (UID: "f56403a2-dc6e-4362-99c2-669531fd3d8d") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:17:57 crc kubenswrapper[4760]: I0123 18:17:57.407591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:17:57 crc kubenswrapper[4760]: E0123 18:17:57.407831 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:57 crc kubenswrapper[4760]: E0123 18:17:57.407920 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert podName:ed6619a3-ea05-44ae-880e-c9ba87fb93f9 nodeName:}" failed. No retries permitted until 2026-01-23 18:18:05.407899582 +0000 UTC m=+1028.410357525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" (UID: "ed6619a3-ea05-44ae-880e-c9ba87fb93f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:17:58 crc kubenswrapper[4760]: I0123 18:17:58.014990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:58 crc kubenswrapper[4760]: I0123 18:17:58.015037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:17:58 crc kubenswrapper[4760]: E0123 18:17:58.015150 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:17:58 crc kubenswrapper[4760]: E0123 18:17:58.015183 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:17:58 crc kubenswrapper[4760]: E0123 18:17:58.015210 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:18:06.01519482 +0000 UTC m=+1029.017652753 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:17:58 crc kubenswrapper[4760]: E0123 18:17:58.015225 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:18:06.01521876 +0000 UTC m=+1029.017676693 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:18:02 crc kubenswrapper[4760]: E0123 18:18:02.437028 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.166:5001/openstack-k8s-operators/manila-operator:e7f5b4381cc242616fdf1e1d0fb228be697456ca" Jan 23 18:18:02 crc kubenswrapper[4760]: E0123 18:18:02.438143 4760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.166:5001/openstack-k8s-operators/manila-operator:e7f5b4381cc242616fdf1e1d0fb228be697456ca" Jan 23 18:18:02 crc kubenswrapper[4760]: E0123 18:18:02.438306 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.166:5001/openstack-k8s-operators/manila-operator:e7f5b4381cc242616fdf1e1d0fb228be697456ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzk9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7758cc4469-bczdt_openstack-operators(8a1115aa-5fc1-4dc1-8752-7d15f984837b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:02 crc kubenswrapper[4760]: E0123 18:18:02.439534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" podUID="8a1115aa-5fc1-4dc1-8752-7d15f984837b" Jan 23 18:18:03 crc kubenswrapper[4760]: E0123 18:18:03.365850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.166:5001/openstack-k8s-operators/manila-operator:e7f5b4381cc242616fdf1e1d0fb228be697456ca\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" podUID="8a1115aa-5fc1-4dc1-8752-7d15f984837b" Jan 23 18:18:04 crc kubenswrapper[4760]: E0123 18:18:04.839356 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 18:18:04 crc kubenswrapper[4760]: E0123 18:18:04.839572 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pz824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-qf78h_openstack-operators(4d84645c-b378-4acd-a3e5-638c61a3b709): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:04 crc kubenswrapper[4760]: E0123 18:18:04.840890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" podUID="4d84645c-b378-4acd-a3e5-638c61a3b709" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.015836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.024960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f56403a2-dc6e-4362-99c2-669531fd3d8d-cert\") pod \"infra-operator-controller-manager-58749ffdfb-cxqww\" (UID: \"f56403a2-dc6e-4362-99c2-669531fd3d8d\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.133337 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b9skj" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.141854 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.421175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.427714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6619a3-ea05-44ae-880e-c9ba87fb93f9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm\" (UID: \"ed6619a3-ea05-44ae-880e-c9ba87fb93f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:18:05 crc kubenswrapper[4760]: E0123 18:18:05.468975 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" 
podUID="4d84645c-b378-4acd-a3e5-638c61a3b709" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.680792 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jcf8j" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.689484 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:18:05 crc kubenswrapper[4760]: I0123 18:18:05.711382 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww"] Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.031474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.031717 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:06 crc kubenswrapper[4760]: E0123 18:18:06.031654 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:18:06 crc kubenswrapper[4760]: E0123 18:18:06.031850 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs 
podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:18:22.031832605 +0000 UTC m=+1045.034290538 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "webhook-server-cert" not found Jan 23 18:18:06 crc kubenswrapper[4760]: E0123 18:18:06.031855 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:18:06 crc kubenswrapper[4760]: E0123 18:18:06.031901 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs podName:f5c3fafa-733d-4719-89f5-afd3c885919e nodeName:}" failed. No retries permitted until 2026-01-23 18:18:22.031887616 +0000 UTC m=+1045.034345549 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs") pod "openstack-operator-controller-manager-7555664f8b-7kpfz" (UID: "f5c3fafa-733d-4719-89f5-afd3c885919e") : secret "metrics-server-cert" not found Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.158308 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm"] Jan 23 18:18:06 crc kubenswrapper[4760]: W0123 18:18:06.163945 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded6619a3_ea05_44ae_880e_c9ba87fb93f9.slice/crio-bdc78d189cdb468852b8caba0dc664e7d69f4fd4e51651c47ff722fb2c6aa00a WatchSource:0}: Error finding container bdc78d189cdb468852b8caba0dc664e7d69f4fd4e51651c47ff722fb2c6aa00a: Status 404 returned error can't find the container with id bdc78d189cdb468852b8caba0dc664e7d69f4fd4e51651c47ff722fb2c6aa00a Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.393201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerStarted","Data":"e2370ba1d44b1300e63d2a145a9caffef00a5a8eea222311bb02b2b9800a8ce8"} Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.394955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" event={"ID":"ed6619a3-ea05-44ae-880e-c9ba87fb93f9","Type":"ContainerStarted","Data":"bdc78d189cdb468852b8caba0dc664e7d69f4fd4e51651c47ff722fb2c6aa00a"} Jan 23 18:18:06 crc kubenswrapper[4760]: I0123 18:18:06.396216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" 
event={"ID":"f56403a2-dc6e-4362-99c2-669531fd3d8d","Type":"ContainerStarted","Data":"c9773eabb1acac7b398a8cea0607cda5b54508a6942ee66c027f439167b99454"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.423021 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" event={"ID":"fcc9617c-e7aa-4707-bcaf-1492e3e0fee6","Type":"ContainerStarted","Data":"928cd07ada0ac0d98c54850a4ecb20ea4d2214233e302ac5378505770286c899"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.424516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.435084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" event={"ID":"587a0d90-d644-4501-bc83-ef454dc4b3d9","Type":"ContainerStarted","Data":"45f5e075e7db05ccdaa716e8a02621b6988b7c253bb6deef626fff9023b1f90e"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.435713 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.438817 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" event={"ID":"fdb3af86-9ecd-45de-8f76-976ff884b581","Type":"ContainerStarted","Data":"fdf08b1efc739e48aa60cfacef6437a0bc11563dd4d6464eaf6e554f03bc595f"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.439608 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.446468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" event={"ID":"0d449643-d693-4591-a0d6-42e8129a3468","Type":"ContainerStarted","Data":"31f21a9c472b43113be9f75b5dceed54c94d62555cf540064e986b263b5148f9"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.446529 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.449809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" event={"ID":"9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0","Type":"ContainerStarted","Data":"5e6de70d4b0756c05f61339272c17d4b316ce4f0096e5092cf69cf9e511ce901"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.450074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.455182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" event={"ID":"b96abc36-760b-4dfb-bc01-80872c59c059","Type":"ContainerStarted","Data":"ebb9a66899f1d5e6d9159d5d2d0eee137f9f59faff7364217f0c330ce47fa92c"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.455390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.459378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" event={"ID":"d0989ccd-5163-46a0-b578-975ba1c31f03","Type":"ContainerStarted","Data":"8a1f1ea94abacfa6bd95502a108e54b6a7b83938217184db3feb2f1ce8c4bf2b"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.459399 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" podStartSLOduration=3.981244108 podStartE2EDuration="18.459378891s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.738241888 +0000 UTC m=+1013.740699831" lastFinishedPulling="2026-01-23 18:18:05.216376671 +0000 UTC m=+1028.218834614" observedRunningTime="2026-01-23 18:18:07.451983086 +0000 UTC m=+1030.454441019" watchObservedRunningTime="2026-01-23 18:18:07.459378891 +0000 UTC m=+1030.461836824" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.460121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.481289 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" podStartSLOduration=4.082732442 podStartE2EDuration="18.481269578s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.052346351 +0000 UTC m=+1014.054804284" lastFinishedPulling="2026-01-23 18:18:05.450883447 +0000 UTC m=+1028.453341420" observedRunningTime="2026-01-23 18:18:07.468012811 +0000 UTC m=+1030.470470744" watchObservedRunningTime="2026-01-23 18:18:07.481269578 +0000 UTC m=+1030.483727511" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.487910 4760 generic.go:334] "Generic (PLEG): container finished" podID="141d7aff-3923-4d3a-84bf-6682f127c277" containerID="e2370ba1d44b1300e63d2a145a9caffef00a5a8eea222311bb02b2b9800a8ce8" exitCode=0 Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.488671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" 
event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerDied","Data":"e2370ba1d44b1300e63d2a145a9caffef00a5a8eea222311bb02b2b9800a8ce8"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.505338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" event={"ID":"0dab320d-061f-43f2-9e57-1c94b958522a","Type":"ContainerStarted","Data":"a6814d86b1555d880517ce39a45a8631c5eca8261fb0f0ee8c7d74ae7422edc2"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.506138 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.510191 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" podStartSLOduration=7.198849342 podStartE2EDuration="19.51018048s" podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.587131185 +0000 UTC m=+1013.589589118" lastFinishedPulling="2026-01-23 18:18:02.898462303 +0000 UTC m=+1025.900920256" observedRunningTime="2026-01-23 18:18:07.502743444 +0000 UTC m=+1030.505201377" watchObservedRunningTime="2026-01-23 18:18:07.51018048 +0000 UTC m=+1030.512638413" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.515643 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" event={"ID":"3648750a-24fe-4391-8921-66d791485e98","Type":"ContainerStarted","Data":"87a526404bed2469334fc869223487ed089a39270e5d4f7bd1fcc6e87957d0cf"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.515912 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.520732 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" event={"ID":"ef671ec0-a50e-4acd-bd63-31aa36cf3033","Type":"ContainerStarted","Data":"2e9f149711d66f97b5f5e0834fea38cccc44fc189e36a86a2914e2e1921b428e"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.520864 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.536008 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" podStartSLOduration=3.914131525 podStartE2EDuration="18.535994377s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.894057269 +0000 UTC m=+1013.896515202" lastFinishedPulling="2026-01-23 18:18:05.515920111 +0000 UTC m=+1028.518378054" observedRunningTime="2026-01-23 18:18:07.533724993 +0000 UTC m=+1030.536182926" watchObservedRunningTime="2026-01-23 18:18:07.535994377 +0000 UTC m=+1030.538452310" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.546607 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" event={"ID":"58df2b6d-bc85-4266-bc2c-143cd52efc28","Type":"ContainerStarted","Data":"84fb97561a28e8a06b8fdc986d085f40d3a91ac8edc313f5639bda09c7997040"} Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.547233 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.562809 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" podStartSLOduration=7.404288183 podStartE2EDuration="19.56279378s" 
podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.738243048 +0000 UTC m=+1013.740700981" lastFinishedPulling="2026-01-23 18:18:02.896748635 +0000 UTC m=+1025.899206578" observedRunningTime="2026-01-23 18:18:07.556461304 +0000 UTC m=+1030.558919237" watchObservedRunningTime="2026-01-23 18:18:07.56279378 +0000 UTC m=+1030.565251713" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.575319 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" podStartSLOduration=7.251084911 podStartE2EDuration="19.575299857s" podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.574304498 +0000 UTC m=+1013.576762431" lastFinishedPulling="2026-01-23 18:18:02.898519444 +0000 UTC m=+1025.900977377" observedRunningTime="2026-01-23 18:18:07.571968395 +0000 UTC m=+1030.574426328" watchObservedRunningTime="2026-01-23 18:18:07.575299857 +0000 UTC m=+1030.577757790" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.638488 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" podStartSLOduration=4.004116043 podStartE2EDuration="18.63846715s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.810810341 +0000 UTC m=+1013.813268274" lastFinishedPulling="2026-01-23 18:18:05.445161408 +0000 UTC m=+1028.447619381" observedRunningTime="2026-01-23 18:18:07.635081375 +0000 UTC m=+1030.637539318" watchObservedRunningTime="2026-01-23 18:18:07.63846715 +0000 UTC m=+1030.640925083" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.668552 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" podStartSLOduration=6.490030881 podStartE2EDuration="18.668530353s" 
podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.720067753 +0000 UTC m=+1013.722525686" lastFinishedPulling="2026-01-23 18:18:02.898567195 +0000 UTC m=+1025.901025158" observedRunningTime="2026-01-23 18:18:07.653368223 +0000 UTC m=+1030.655826156" watchObservedRunningTime="2026-01-23 18:18:07.668530353 +0000 UTC m=+1030.670988286" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.702156 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" podStartSLOduration=7.379486064 podStartE2EDuration="19.702134616s" podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.574101383 +0000 UTC m=+1013.576559316" lastFinishedPulling="2026-01-23 18:18:02.896749935 +0000 UTC m=+1025.899207868" observedRunningTime="2026-01-23 18:18:07.685957327 +0000 UTC m=+1030.688415260" watchObservedRunningTime="2026-01-23 18:18:07.702134616 +0000 UTC m=+1030.704592549" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.708286 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" podStartSLOduration=4.259441745 podStartE2EDuration="18.708272906s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.768055274 +0000 UTC m=+1013.770513207" lastFinishedPulling="2026-01-23 18:18:05.216886375 +0000 UTC m=+1028.219344368" observedRunningTime="2026-01-23 18:18:07.708042169 +0000 UTC m=+1030.710500102" watchObservedRunningTime="2026-01-23 18:18:07.708272906 +0000 UTC m=+1030.710730839" Jan 23 18:18:07 crc kubenswrapper[4760]: I0123 18:18:07.724473 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" podStartSLOduration=4.62094692 podStartE2EDuration="19.724454725s" 
podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.344557204 +0000 UTC m=+1013.347015137" lastFinishedPulling="2026-01-23 18:18:05.448064969 +0000 UTC m=+1028.450522942" observedRunningTime="2026-01-23 18:18:07.723149609 +0000 UTC m=+1030.725607542" watchObservedRunningTime="2026-01-23 18:18:07.724454725 +0000 UTC m=+1030.726912658" Jan 23 18:18:08 crc kubenswrapper[4760]: I0123 18:18:08.563086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerStarted","Data":"629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece"} Jan 23 18:18:08 crc kubenswrapper[4760]: I0123 18:18:08.593884 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcfbv" podStartSLOduration=3.741757729 podStartE2EDuration="20.593865317s" podCreationTimestamp="2026-01-23 18:17:48 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.283502324 +0000 UTC m=+1014.285960257" lastFinishedPulling="2026-01-23 18:18:08.135609902 +0000 UTC m=+1031.138067845" observedRunningTime="2026-01-23 18:18:08.590851712 +0000 UTC m=+1031.593309645" watchObservedRunningTime="2026-01-23 18:18:08.593865317 +0000 UTC m=+1031.596323250" Jan 23 18:18:08 crc kubenswrapper[4760]: I0123 18:18:08.765717 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:08 crc kubenswrapper[4760]: I0123 18:18:08.765781 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:09 crc kubenswrapper[4760]: I0123 18:18:09.814476 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kcfbv" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" probeResult="failure" output=< 
Jan 23 18:18:09 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:18:09 crc kubenswrapper[4760]: > Jan 23 18:18:16 crc kubenswrapper[4760]: I0123 18:18:16.076083 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:18:16 crc kubenswrapper[4760]: I0123 18:18:16.076798 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:18:16 crc kubenswrapper[4760]: I0123 18:18:16.076898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:18:16 crc kubenswrapper[4760]: I0123 18:18:16.077993 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:18:16 crc kubenswrapper[4760]: I0123 18:18:16.078092 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c" gracePeriod=600 Jan 23 18:18:18 crc kubenswrapper[4760]: I0123 18:18:18.655519 4760 
generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c" exitCode=0 Jan 23 18:18:18 crc kubenswrapper[4760]: I0123 18:18:18.655568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c"} Jan 23 18:18:18 crc kubenswrapper[4760]: I0123 18:18:18.655607 4760 scope.go:117] "RemoveContainer" containerID="ba503457cf1516c95b31c578f53ac143902b2c5fe146afa02b8c1856b3d9d060" Jan 23 18:18:18 crc kubenswrapper[4760]: I0123 18:18:18.841887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:18 crc kubenswrapper[4760]: I0123 18:18:18.910050 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.271367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5g86q" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.309401 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-cql54" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.336782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j9vqr" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.349991 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pjz6t" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 
18:18:19.450378 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ljpl4" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.555903 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8w8lt" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.556648 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vvcd8" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.651193 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.721030 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-jpt62" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.918735 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jq9l4" Jan 23 18:18:19 crc kubenswrapper[4760]: I0123 18:18:19.944975 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-lzv66" Jan 23 18:18:20 crc kubenswrapper[4760]: I0123 18:18:20.387723 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7bc5b" Jan 23 18:18:20 crc kubenswrapper[4760]: I0123 18:18:20.667772 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcfbv" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" containerID="cri-o://629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" 
gracePeriod=2 Jan 23 18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.100103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.100345 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.106146 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-metrics-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.106528 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5c3fafa-733d-4719-89f5-afd3c885919e-webhook-certs\") pod \"openstack-operator-controller-manager-7555664f8b-7kpfz\" (UID: \"f5c3fafa-733d-4719-89f5-afd3c885919e\") " pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.319229 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fqhsc" Jan 23 
18:18:22 crc kubenswrapper[4760]: I0123 18:18:22.327738 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:28 crc kubenswrapper[4760]: I0123 18:18:28.739750 4760 generic.go:334] "Generic (PLEG): container finished" podID="141d7aff-3923-4d3a-84bf-6682f127c277" containerID="629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" exitCode=0 Jan 23 18:18:28 crc kubenswrapper[4760]: I0123 18:18:28.739833 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerDied","Data":"629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece"} Jan 23 18:18:28 crc kubenswrapper[4760]: E0123 18:18:28.766222 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece is running failed: container process not found" containerID="629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:18:28 crc kubenswrapper[4760]: E0123 18:18:28.767135 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece is running failed: container process not found" containerID="629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:18:28 crc kubenswrapper[4760]: E0123 18:18:28.767685 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece is running failed: 
container process not found" containerID="629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 18:18:28 crc kubenswrapper[4760]: E0123 18:18:28.767797 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-kcfbv" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" Jan 23 18:18:29 crc kubenswrapper[4760]: I0123 18:18:29.166822 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:18:29 crc kubenswrapper[4760]: E0123 18:18:29.642435 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 18:18:29 crc kubenswrapper[4760]: E0123 18:18:29.642934 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfwzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-nxws7_openstack-operators(78a244f9-feb4-4df5-b5ec-7bb09185e655): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:29 crc kubenswrapper[4760]: E0123 18:18:29.644427 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podUID="78a244f9-feb4-4df5-b5ec-7bb09185e655" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.148928 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:d9fa03b698a5daff784697c48ed3e061c216edbd64e18f3af6228ddb70147ea0" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.149098 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:d9fa03b698a5daff784697c48ed3e061c216edbd64e18f3af6228ddb70147ea0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsqsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-58749ffdfb-cxqww_openstack-operators(f56403a2-dc6e-4362-99c2-669531fd3d8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.150305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" podUID="f56403a2-dc6e-4362-99c2-669531fd3d8d" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.616493 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.616678 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-ngqsw_openstack-operators(89d52854-e7b7-4eba-b990-49a971674ab5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.618150 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podUID="89d52854-e7b7-4eba-b990-49a971674ab5" Jan 23 18:18:30 crc kubenswrapper[4760]: E0123 18:18:30.760757 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:d9fa03b698a5daff784697c48ed3e061c216edbd64e18f3af6228ddb70147ea0\\\"\"" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" podUID="f56403a2-dc6e-4362-99c2-669531fd3d8d" Jan 23 18:18:31 crc kubenswrapper[4760]: E0123 18:18:31.464467 4760 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22" Jan 23 18:18:31 crc kubenswrapper[4760]: E0123 18:18:31.467046 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-bar
bican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:q
uay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD
_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_
MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_
DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK
_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_I
MAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhmz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm_openstack-operators(ed6619a3-ea05-44ae-880e-c9ba87fb93f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:31 crc kubenswrapper[4760]: E0123 18:18:31.469040 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" podUID="ed6619a3-ea05-44ae-880e-c9ba87fb93f9" Jan 23 18:18:31 crc kubenswrapper[4760]: E0123 18:18:31.778335 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" podUID="ed6619a3-ea05-44ae-880e-c9ba87fb93f9" Jan 23 18:18:34 crc kubenswrapper[4760]: E0123 18:18:34.152572 
4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 18:18:34 crc kubenswrapper[4760]: E0123 18:18:34.153153 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxgkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j24jt_openstack-operators(ec704d93-0ca4-4d63-a123-dbb5a62bffed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:18:34 crc kubenswrapper[4760]: E0123 18:18:34.154427 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podUID="ec704d93-0ca4-4d63-a123-dbb5a62bffed" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.432970 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.497234 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content\") pod \"141d7aff-3923-4d3a-84bf-6682f127c277\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.497607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities\") pod \"141d7aff-3923-4d3a-84bf-6682f127c277\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.497678 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnx2n\" (UniqueName: \"kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n\") pod \"141d7aff-3923-4d3a-84bf-6682f127c277\" (UID: \"141d7aff-3923-4d3a-84bf-6682f127c277\") " Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.498887 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities" (OuterVolumeSpecName: "utilities") pod "141d7aff-3923-4d3a-84bf-6682f127c277" (UID: "141d7aff-3923-4d3a-84bf-6682f127c277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.505001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n" (OuterVolumeSpecName: "kube-api-access-dnx2n") pod "141d7aff-3923-4d3a-84bf-6682f127c277" (UID: "141d7aff-3923-4d3a-84bf-6682f127c277"). InnerVolumeSpecName "kube-api-access-dnx2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.586047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "141d7aff-3923-4d3a-84bf-6682f127c277" (UID: "141d7aff-3923-4d3a-84bf-6682f127c277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.599506 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.599539 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141d7aff-3923-4d3a-84bf-6682f127c277-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.599552 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnx2n\" (UniqueName: \"kubernetes.io/projected/141d7aff-3923-4d3a-84bf-6682f127c277-kube-api-access-dnx2n\") on node \"crc\" DevicePath \"\"" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.664680 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz"] Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.811729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" event={"ID":"807135e7-1ace-4928-be9b-82b8a58464fe","Type":"ContainerStarted","Data":"021ae297d48f0e7afa55d249e85a09bb81478c26bda1008402bab551c9d404a2"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.813025 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.828675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" event={"ID":"bb6317fe-84f6-4921-9286-6b1aadd6d038","Type":"ContainerStarted","Data":"2071a20eaf1f5af5af226d4764c04be624d396b0842a82f84a131c28e00ff9e2"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.829352 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.836905 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" event={"ID":"4d84645c-b378-4acd-a3e5-638c61a3b709","Type":"ContainerStarted","Data":"e88a7b3b82ae3b9c899420d699cf3ce3e89628bd59c499de2bad4ff8a210fbc1"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.837048 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" podStartSLOduration=2.64241057 podStartE2EDuration="45.837032978s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.062546304 +0000 UTC m=+1014.065004237" lastFinishedPulling="2026-01-23 18:18:34.257168682 +0000 UTC m=+1057.259626645" observedRunningTime="2026-01-23 18:18:34.830457926 +0000 UTC m=+1057.832915859" watchObservedRunningTime="2026-01-23 18:18:34.837032978 +0000 UTC m=+1057.839490912" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.837719 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.852010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.859800 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" podStartSLOduration=2.528133072 podStartE2EDuration="45.859784901s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.925046699 +0000 UTC m=+1013.927504632" lastFinishedPulling="2026-01-23 18:18:34.256698518 +0000 UTC m=+1057.259156461" observedRunningTime="2026-01-23 18:18:34.855308726 +0000 UTC m=+1057.857766659" watchObservedRunningTime="2026-01-23 18:18:34.859784901 +0000 UTC m=+1057.862242834" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.861561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" event={"ID":"88c1fb15-33fa-40cf-afa9-068d281bbed5","Type":"ContainerStarted","Data":"e41970b82d987e373de4641eddd921390a4197c66e3a1ed47029d0d09b9501f4"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.861770 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.867289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" event={"ID":"8a1115aa-5fc1-4dc1-8752-7d15f984837b","Type":"ContainerStarted","Data":"5b4c83bac5553447bfa970dbadd93aedb3965d7d454fcb88cb0931262cc70e5c"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.868060 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 
18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.870441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcfbv" event={"ID":"141d7aff-3923-4d3a-84bf-6682f127c277","Type":"ContainerDied","Data":"120d40a3229ff584b8474ef3dfc565731bce6d03ff1ee984971f47a2ed4792ae"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.870483 4760 scope.go:117] "RemoveContainer" containerID="629efd00360f891a98eb9996abaa9563ea331eb16ab70fefb318619103de2ece" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.870583 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcfbv" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.885395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" event={"ID":"f5c3fafa-733d-4719-89f5-afd3c885919e","Type":"ContainerStarted","Data":"45e0a76fe326b479c4067cd92b1b79573468e98c517b13386a02f5600b24d42e"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.886028 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.901914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" event={"ID":"285e41c1-c4f8-4978-9a78-ca8d88b45f29","Type":"ContainerStarted","Data":"e7b924bae5dc69902981a4a5fda41bf065ee841efacd75136152efe4dd5c8382"} Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.902671 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.923131 4760 scope.go:117] "RemoveContainer" containerID="e2370ba1d44b1300e63d2a145a9caffef00a5a8eea222311bb02b2b9800a8ce8" Jan 
23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.931091 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" podStartSLOduration=2.457291457 podStartE2EDuration="45.931070888s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.808157607 +0000 UTC m=+1013.810615540" lastFinishedPulling="2026-01-23 18:18:34.281937038 +0000 UTC m=+1057.284394971" observedRunningTime="2026-01-23 18:18:34.923527588 +0000 UTC m=+1057.925985511" watchObservedRunningTime="2026-01-23 18:18:34.931070888 +0000 UTC m=+1057.933528821" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.959592 4760 scope.go:117] "RemoveContainer" containerID="fd342a71df3647893287d17cf5ce63949b1b43765cea725a57b4f7131ec55ab1" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.973388 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" podStartSLOduration=45.973374432 podStartE2EDuration="45.973374432s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:18:34.968882477 +0000 UTC m=+1057.971340400" watchObservedRunningTime="2026-01-23 18:18:34.973374432 +0000 UTC m=+1057.975832355" Jan 23 18:18:34 crc kubenswrapper[4760]: I0123 18:18:34.995296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" podStartSLOduration=2.846966768 podStartE2EDuration="45.995275289s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.052718022 +0000 UTC m=+1014.055175955" lastFinishedPulling="2026-01-23 18:18:34.201026523 +0000 UTC m=+1057.203484476" observedRunningTime="2026-01-23 18:18:34.995059373 +0000 UTC 
m=+1057.997517306" watchObservedRunningTime="2026-01-23 18:18:34.995275289 +0000 UTC m=+1057.997733222" Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.045877 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" podStartSLOduration=2.774314622 podStartE2EDuration="46.045858273s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.929210795 +0000 UTC m=+1013.931668738" lastFinishedPulling="2026-01-23 18:18:34.200754456 +0000 UTC m=+1057.203212389" observedRunningTime="2026-01-23 18:18:35.026689971 +0000 UTC m=+1058.029147904" watchObservedRunningTime="2026-01-23 18:18:35.045858273 +0000 UTC m=+1058.048316206" Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.067687 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.072797 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcfbv"] Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.073334 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" podStartSLOduration=2.702037997 podStartE2EDuration="46.073323685s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.904654664 +0000 UTC m=+1013.907112597" lastFinishedPulling="2026-01-23 18:18:34.275940352 +0000 UTC m=+1057.278398285" observedRunningTime="2026-01-23 18:18:35.051805928 +0000 UTC m=+1058.054263861" watchObservedRunningTime="2026-01-23 18:18:35.073323685 +0000 UTC m=+1058.075781608" Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.605428 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" 
path="/var/lib/kubelet/pods/141d7aff-3923-4d3a-84bf-6682f127c277/volumes" Jan 23 18:18:35 crc kubenswrapper[4760]: I0123 18:18:35.911846 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" event={"ID":"f5c3fafa-733d-4719-89f5-afd3c885919e","Type":"ContainerStarted","Data":"527bbb1bfb92a7ade354163c5a0eb5cd23fd8a54cf5e9c0d0e2c29338c60e7d4"} Jan 23 18:18:39 crc kubenswrapper[4760]: I0123 18:18:39.658556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf" Jan 23 18:18:39 crc kubenswrapper[4760]: I0123 18:18:39.881751 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-qf78h" Jan 23 18:18:39 crc kubenswrapper[4760]: I0123 18:18:39.903218 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7758cc4469-bczdt" Jan 23 18:18:40 crc kubenswrapper[4760]: I0123 18:18:40.095016 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-tbdng" Jan 23 18:18:40 crc kubenswrapper[4760]: I0123 18:18:40.290667 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-kmclp" Jan 23 18:18:40 crc kubenswrapper[4760]: I0123 18:18:40.423949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-r2pws" Jan 23 18:18:41 crc kubenswrapper[4760]: E0123 18:18:41.597695 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podUID="78a244f9-feb4-4df5-b5ec-7bb09185e655" Jan 23 18:18:42 crc kubenswrapper[4760]: I0123 18:18:42.335691 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7555664f8b-7kpfz" Jan 23 18:18:43 crc kubenswrapper[4760]: E0123 18:18:43.598673 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podUID="89d52854-e7b7-4eba-b990-49a971674ab5" Jan 23 18:18:46 crc kubenswrapper[4760]: I0123 18:18:46.001576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" event={"ID":"f56403a2-dc6e-4362-99c2-669531fd3d8d","Type":"ContainerStarted","Data":"4de8ccec7dca9bff8e9248c837ddd1238f5483134b7dbc7766e539ae107cca3f"} Jan 23 18:18:46 crc kubenswrapper[4760]: I0123 18:18:46.002645 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:18:46 crc kubenswrapper[4760]: I0123 18:18:46.032531 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" podStartSLOduration=17.648986948 podStartE2EDuration="57.032505937s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:18:05.748564286 +0000 UTC m=+1028.751022219" lastFinishedPulling="2026-01-23 18:18:45.132083275 +0000 UTC 
m=+1068.134541208" observedRunningTime="2026-01-23 18:18:46.02320565 +0000 UTC m=+1069.025663623" watchObservedRunningTime="2026-01-23 18:18:46.032505937 +0000 UTC m=+1069.034963900" Jan 23 18:18:46 crc kubenswrapper[4760]: E0123 18:18:46.597602 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podUID="ec704d93-0ca4-4d63-a123-dbb5a62bffed" Jan 23 18:18:47 crc kubenswrapper[4760]: I0123 18:18:47.008644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" event={"ID":"ed6619a3-ea05-44ae-880e-c9ba87fb93f9","Type":"ContainerStarted","Data":"57d5e1ca5b1128be2f849853e6e7f30576fdbfed9b8eaed8782dc919923e5588"} Jan 23 18:18:47 crc kubenswrapper[4760]: I0123 18:18:47.009145 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:18:47 crc kubenswrapper[4760]: I0123 18:18:47.044807 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" podStartSLOduration=17.945955336 podStartE2EDuration="58.044784432s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:18:06.167157949 +0000 UTC m=+1029.169615922" lastFinishedPulling="2026-01-23 18:18:46.265987085 +0000 UTC m=+1069.268445018" observedRunningTime="2026-01-23 18:18:47.042488208 +0000 UTC m=+1070.044946141" watchObservedRunningTime="2026-01-23 18:18:47.044784432 +0000 UTC m=+1070.047242375" Jan 23 18:18:55 crc kubenswrapper[4760]: I0123 18:18:55.149116 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-cxqww" Jan 23 18:18:55 crc kubenswrapper[4760]: I0123 18:18:55.697613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm" Jan 23 18:18:57 crc kubenswrapper[4760]: I0123 18:18:57.083299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" event={"ID":"78a244f9-feb4-4df5-b5ec-7bb09185e655","Type":"ContainerStarted","Data":"74ae90dc7a8c681a116176d89c307c20c59c4cf8e6fc106e795fb183d10bd362"} Jan 23 18:18:57 crc kubenswrapper[4760]: I0123 18:18:57.083859 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:18:57 crc kubenswrapper[4760]: I0123 18:18:57.108652 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" podStartSLOduration=2.994356736 podStartE2EDuration="1m8.108630605s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.053485763 +0000 UTC m=+1014.055943706" lastFinishedPulling="2026-01-23 18:18:56.167759642 +0000 UTC m=+1079.170217575" observedRunningTime="2026-01-23 18:18:57.102794224 +0000 UTC m=+1080.105252167" watchObservedRunningTime="2026-01-23 18:18:57.108630605 +0000 UTC m=+1080.111088548" Jan 23 18:19:00 crc kubenswrapper[4760]: I0123 18:19:00.107573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" event={"ID":"89d52854-e7b7-4eba-b990-49a971674ab5","Type":"ContainerStarted","Data":"f879738ce921b1b289916e3ed4a39c2c29172710937d9ef2c16c01e22d120347"} Jan 23 18:19:00 crc kubenswrapper[4760]: I0123 
18:19:00.108160 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:19:00 crc kubenswrapper[4760]: I0123 18:19:00.131226 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" podStartSLOduration=2.876794026 podStartE2EDuration="1m11.131171413s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:50.909511269 +0000 UTC m=+1013.911969202" lastFinishedPulling="2026-01-23 18:18:59.163888626 +0000 UTC m=+1082.166346589" observedRunningTime="2026-01-23 18:19:00.122267005 +0000 UTC m=+1083.124724968" watchObservedRunningTime="2026-01-23 18:19:00.131171413 +0000 UTC m=+1083.133629356" Jan 23 18:19:03 crc kubenswrapper[4760]: I0123 18:19:03.134908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" event={"ID":"ec704d93-0ca4-4d63-a123-dbb5a62bffed","Type":"ContainerStarted","Data":"70353b022186853428bfde1efcb24d2a124ec6505552b805ecc4d54bc7a67b20"} Jan 23 18:19:04 crc kubenswrapper[4760]: I0123 18:19:04.162907 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j24jt" podStartSLOduration=4.794331145 podStartE2EDuration="1m15.16288717s" podCreationTimestamp="2026-01-23 18:17:49 +0000 UTC" firstStartedPulling="2026-01-23 18:17:51.062140423 +0000 UTC m=+1014.064598356" lastFinishedPulling="2026-01-23 18:19:01.430696408 +0000 UTC m=+1084.433154381" observedRunningTime="2026-01-23 18:19:04.157713206 +0000 UTC m=+1087.160171169" watchObservedRunningTime="2026-01-23 18:19:04.16288717 +0000 UTC m=+1087.165345113" Jan 23 18:19:10 crc kubenswrapper[4760]: I0123 18:19:10.091788 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-ngqsw" Jan 23 18:19:10 crc kubenswrapper[4760]: I0123 18:19:10.335744 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-nxws7" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.883341 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:26 crc kubenswrapper[4760]: E0123 18:19:26.885240 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="extract-utilities" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.885257 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="extract-utilities" Jan 23 18:19:26 crc kubenswrapper[4760]: E0123 18:19:26.885281 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="extract-content" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.885287 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="extract-content" Jan 23 18:19:26 crc kubenswrapper[4760]: E0123 18:19:26.885294 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.885303 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.885463 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="141d7aff-3923-4d3a-84bf-6682f127c277" containerName="registry-server" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.886151 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.889016 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.890281 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.891815 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.891943 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r5z72" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.893494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.950900 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.952334 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.954693 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 18:19:26 crc kubenswrapper[4760]: I0123 18:19:26.960573 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.038871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.039249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44txh\" (UniqueName: \"kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.140372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44txh\" (UniqueName: \"kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.140458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 
23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.140492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z685v\" (UniqueName: \"kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.140534 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.140580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.141529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.159767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44txh\" (UniqueName: \"kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh\") pod \"dnsmasq-dns-675f4bcbfc-mjfpr\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 
18:19:27.204053 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.241545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.241595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z685v\" (UniqueName: \"kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.241627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.242496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.244514 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.259791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z685v\" (UniqueName: \"kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v\") pod \"dnsmasq-dns-78dd6ddcc-9jg6d\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.276821 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.655883 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:27 crc kubenswrapper[4760]: W0123 18:19:27.681537 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod054ad2ec_7313_481e_8388_61e9d2137cb8.slice/crio-b2377228006b97fce17e680e17ec0d1c131ae06d78c2d2d0027edc369d2d475d WatchSource:0}: Error finding container b2377228006b97fce17e680e17ec0d1c131ae06d78c2d2d0027edc369d2d475d: Status 404 returned error can't find the container with id b2377228006b97fce17e680e17ec0d1c131ae06d78c2d2d0027edc369d2d475d Jan 23 18:19:27 crc kubenswrapper[4760]: I0123 18:19:27.735550 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:27 crc kubenswrapper[4760]: W0123 18:19:27.742946 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51bda399_6b8d_4673_844d_072b3533dd98.slice/crio-8340bc0b034aba708926cdc5b5f963e897ba0f081036a702bf517038f1c40728 WatchSource:0}: Error finding container 8340bc0b034aba708926cdc5b5f963e897ba0f081036a702bf517038f1c40728: Status 404 returned error can't find the container with id 
8340bc0b034aba708926cdc5b5f963e897ba0f081036a702bf517038f1c40728 Jan 23 18:19:28 crc kubenswrapper[4760]: I0123 18:19:28.316744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" event={"ID":"054ad2ec-7313-481e-8388-61e9d2137cb8","Type":"ContainerStarted","Data":"b2377228006b97fce17e680e17ec0d1c131ae06d78c2d2d0027edc369d2d475d"} Jan 23 18:19:28 crc kubenswrapper[4760]: I0123 18:19:28.318793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" event={"ID":"51bda399-6b8d-4673-844d-072b3533dd98","Type":"ContainerStarted","Data":"8340bc0b034aba708926cdc5b5f963e897ba0f081036a702bf517038f1c40728"} Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.765009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.793515 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.794718 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.799427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.800112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcvt\" (UniqueName: \"kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.800145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.800195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.902614 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.902763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcvt\" (UniqueName: 
\"kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.902797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.903641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.903795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:29 crc kubenswrapper[4760]: I0123 18:19:29.921268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcvt\" (UniqueName: \"kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt\") pod \"dnsmasq-dns-666b6646f7-8xt5n\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.075652 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.108965 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.110528 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.114446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.114697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsl5\" (UniqueName: \"kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.114766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.119952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.127784 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.215657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.215772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlsl5\" (UniqueName: \"kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.215806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.217637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.217886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config\") pod 
\"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.245247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlsl5\" (UniqueName: \"kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5\") pod \"dnsmasq-dns-57d769cc4f-rtdnw\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.449205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.615827 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.935282 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.936856 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940330 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kzn5r" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940632 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940676 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940883 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.940918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.943274 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 18:19:30 crc kubenswrapper[4760]: I0123 18:19:30.971858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.130971 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131033 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131189 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131233 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fj9w\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w\") pod 
\"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131296 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.131314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.219484 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.221133 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.226783 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.229246 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.229323 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.229645 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.229796 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.229939 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.230097 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l4vzp" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.232224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " 
pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234306 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234330 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234370 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234431 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fj9w\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234451 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.234866 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.235380 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.235649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.235901 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.236107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.236418 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.242728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.254292 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.258086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.262890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.264226 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fj9w\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.265239 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.277092 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336366 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336466 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336759 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.336988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.337020 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.337055 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.337079 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk92\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.337107 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438883 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqk92\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438908 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.438989 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.439019 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.439067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.439474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.439861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.440593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.440697 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") device mount path \"/mnt/openstack/pv05\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.441024 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.442284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.444485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.444554 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.445067 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.448237 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.456539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk92\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.462212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.567788 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:19:31 crc kubenswrapper[4760]: I0123 18:19:31.634451 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.430303 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.432433 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.437075 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.437201 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.437296 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.439190 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.440732 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7vtd8" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.458084 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2mq2\" (UniqueName: \"kubernetes.io/projected/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kube-api-access-w2mq2\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.556770 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660655 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2mq2\" (UniqueName: \"kubernetes.io/projected/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kube-api-access-w2mq2\") pod \"openstack-galera-0\" (UID: 
\"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.660859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.661341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.662096 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.662544 4760 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.664266 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.670545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.682640 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.686545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.687178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.689194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2mq2\" (UniqueName: \"kubernetes.io/projected/eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad-kube-api-access-w2mq2\") pod \"openstack-galera-0\" (UID: \"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad\") " pod="openstack/openstack-galera-0" Jan 23 18:19:32 crc kubenswrapper[4760]: I0123 18:19:32.756329 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.757866 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.759691 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.761908 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.762170 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.762383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-msccz" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.762598 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.772365 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.878697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xvm\" (UniqueName: \"kubernetes.io/projected/c41fcdb0-57f0-4045-948f-16e9f075ae61-kube-api-access-m8xvm\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.878753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.878839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.878869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.878931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.879050 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.879101 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.879119 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980321 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xvm\" (UniqueName: \"kubernetes.io/projected/c41fcdb0-57f0-4045-948f-16e9f075ae61-kube-api-access-m8xvm\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980441 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980567 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.980583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.981341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.981498 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.981696 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.982382 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.986113 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.991924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41fcdb0-57f0-4045-948f-16e9f075ae61-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:33 crc kubenswrapper[4760]: I0123 18:19:33.993130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c41fcdb0-57f0-4045-948f-16e9f075ae61-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 
18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.005240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xvm\" (UniqueName: \"kubernetes.io/projected/c41fcdb0-57f0-4045-948f-16e9f075ae61-kube-api-access-m8xvm\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.009521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c41fcdb0-57f0-4045-948f-16e9f075ae61\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.085578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.129796 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.130793 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.132707 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-wbzkn" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.132958 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.133153 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.139171 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.284822 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.284876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kolla-config\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.284919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-config-data\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.285039 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.285187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jh5\" (UniqueName: \"kubernetes.io/projected/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kube-api-access-s8jh5\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.380098 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" event={"ID":"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4","Type":"ContainerStarted","Data":"e47099959ba2d93084fe5b52d119046533b81f9c1c23c2f8e5e707f423c01c69"} Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.387137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.387188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kolla-config\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.387229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-config-data\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 
crc kubenswrapper[4760]: I0123 18:19:34.388078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-config-data\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.388144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kolla-config\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.395273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.420522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.420838 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8jh5\" (UniqueName: \"kubernetes.io/projected/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kube-api-access-s8jh5\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.454109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0db24b5a-b078-42ca-b3ef-4abf3cf33531-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.455075 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8jh5\" (UniqueName: \"kubernetes.io/projected/0db24b5a-b078-42ca-b3ef-4abf3cf33531-kube-api-access-s8jh5\") pod \"memcached-0\" (UID: \"0db24b5a-b078-42ca-b3ef-4abf3cf33531\") " pod="openstack/memcached-0" Jan 23 18:19:34 crc kubenswrapper[4760]: I0123 18:19:34.754641 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 18:19:35 crc kubenswrapper[4760]: I0123 18:19:35.957107 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:19:35 crc kubenswrapper[4760]: I0123 18:19:35.958288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:19:35 crc kubenswrapper[4760]: I0123 18:19:35.960155 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tl2h8" Jan 23 18:19:35 crc kubenswrapper[4760]: I0123 18:19:35.977565 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:19:36 crc kubenswrapper[4760]: I0123 18:19:36.052349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqswj\" (UniqueName: \"kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj\") pod \"kube-state-metrics-0\" (UID: \"f98838a6-6ae7-4d8a-af63-4ad4cf693690\") " pod="openstack/kube-state-metrics-0" Jan 23 18:19:36 crc kubenswrapper[4760]: I0123 18:19:36.154166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqswj\" (UniqueName: 
\"kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj\") pod \"kube-state-metrics-0\" (UID: \"f98838a6-6ae7-4d8a-af63-4ad4cf693690\") " pod="openstack/kube-state-metrics-0" Jan 23 18:19:36 crc kubenswrapper[4760]: I0123 18:19:36.170084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqswj\" (UniqueName: \"kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj\") pod \"kube-state-metrics-0\" (UID: \"f98838a6-6ae7-4d8a-af63-4ad4cf693690\") " pod="openstack/kube-state-metrics-0" Jan 23 18:19:36 crc kubenswrapper[4760]: I0123 18:19:36.277873 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:19:38 crc kubenswrapper[4760]: I0123 18:19:38.875086 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.925184 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2wpph"] Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.926512 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2wpph" Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.932122 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.932193 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nssqg" Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.933107 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.950354 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph"] Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.975194 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7lf5j"] Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.977498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:39 crc kubenswrapper[4760]: I0123 18:19:39.988222 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7lf5j"] Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.014937 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.016278 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.019311 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.019857 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.019990 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.020636 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bkhk5" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.020931 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.048071 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k4h\" (UniqueName: \"kubernetes.io/projected/ea0533e4-88c1-4a03-93a9-f0680acaafc5-kube-api-access-p6k4h\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-etc-ovs\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113700 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-combined-ca-bundle\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea0533e4-88c1-4a03-93a9-f0680acaafc5-scripts\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113765 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-ovn-controller-tls-certs\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113895 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.113980 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114006 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114023 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmdz\" (UniqueName: \"kubernetes.io/projected/c82ab7b9-010c-49aa-b6cc-a654dad56b87-kube-api-access-6pmdz\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114057 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-lib\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-log-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114232 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-log\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-run\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/eef24537-281d-489c-b15b-5610cfc62b32-scripts\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114323 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kcx\" (UniqueName: \"kubernetes.io/projected/eef24537-281d-489c-b15b-5610cfc62b32-kube-api-access-s5kcx\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.114367 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215252 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmdz\" (UniqueName: \"kubernetes.io/projected/c82ab7b9-010c-49aa-b6cc-a654dad56b87-kube-api-access-6pmdz\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215326 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-lib\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215784 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215836 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-log-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215833 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-log-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215939 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.215963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-lib\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.216274 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220379 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-log\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-run\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eef24537-281d-489c-b15b-5610cfc62b32-scripts\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 
18:19:40.220716 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kcx\" (UniqueName: \"kubernetes.io/projected/eef24537-281d-489c-b15b-5610cfc62b32-kube-api-access-s5kcx\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220738 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6k4h\" (UniqueName: \"kubernetes.io/projected/ea0533e4-88c1-4a03-93a9-f0680acaafc5-kube-api-access-p6k4h\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220908 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-etc-ovs\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.220961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-combined-ca-bundle\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.221152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea0533e4-88c1-4a03-93a9-f0680acaafc5-scripts\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.221189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-ovn-controller-tls-certs\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.221780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea0533e4-88c1-4a03-93a9-f0680acaafc5-var-run-ovn\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.222217 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-log\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.222311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-var-run\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " 
pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.222333 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/eef24537-281d-489c-b15b-5610cfc62b32-etc-ovs\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.225876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eef24537-281d-489c-b15b-5610cfc62b32-scripts\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.226062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.226384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea0533e4-88c1-4a03-93a9-f0680acaafc5-scripts\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.226954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.226965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.227363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82ab7b9-010c-49aa-b6cc-a654dad56b87-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.228268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-ovn-controller-tls-certs\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.229334 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82ab7b9-010c-49aa-b6cc-a654dad56b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.233605 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0533e4-88c1-4a03-93a9-f0680acaafc5-combined-ca-bundle\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.237067 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmdz\" (UniqueName: \"kubernetes.io/projected/c82ab7b9-010c-49aa-b6cc-a654dad56b87-kube-api-access-6pmdz\") pod \"ovsdbserver-nb-0\" (UID: 
\"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.241112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6k4h\" (UniqueName: \"kubernetes.io/projected/ea0533e4-88c1-4a03-93a9-f0680acaafc5-kube-api-access-p6k4h\") pod \"ovn-controller-2wpph\" (UID: \"ea0533e4-88c1-4a03-93a9-f0680acaafc5\") " pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.243564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kcx\" (UniqueName: \"kubernetes.io/projected/eef24537-281d-489c-b15b-5610cfc62b32-kube-api-access-s5kcx\") pod \"ovn-controller-ovs-7lf5j\" (UID: \"eef24537-281d-489c-b15b-5610cfc62b32\") " pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.248476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c82ab7b9-010c-49aa-b6cc-a654dad56b87\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.260824 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.302203 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:40 crc kubenswrapper[4760]: I0123 18:19:40.347819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.669597 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.672706 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.681624 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.681711 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pchl8" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.681856 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.682289 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.685182 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768118 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjq5\" (UniqueName: \"kubernetes.io/projected/7350c936-08ea-4b64-ae16-a0a7c3241c52-kube-api-access-2bjq5\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768191 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768381 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.768622 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc 
kubenswrapper[4760]: I0123 18:19:42.870454 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870725 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bjq5\" (UniqueName: \"kubernetes.io/projected/7350c936-08ea-4b64-ae16-a0a7c3241c52-kube-api-access-2bjq5\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.870835 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-config\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.871011 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.871948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.872501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-config\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" 
Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.872851 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7350c936-08ea-4b64-ae16-a0a7c3241c52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.876137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.876349 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.877027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7350c936-08ea-4b64-ae16-a0a7c3241c52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.888736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bjq5\" (UniqueName: \"kubernetes.io/projected/7350c936-08ea-4b64-ae16-a0a7c3241c52-kube-api-access-2bjq5\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.904500 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7350c936-08ea-4b64-ae16-a0a7c3241c52\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:42 crc kubenswrapper[4760]: I0123 18:19:42.998900 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 18:19:45 crc kubenswrapper[4760]: W0123 18:19:45.049056 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb7df56d_a417_4f3e_8f53_8ec91ab9d4ad.slice/crio-4fccdcb0fb7a7866989ea5f5e2313258045441ba44107c6281c1c9b48d511b1c WatchSource:0}: Error finding container 4fccdcb0fb7a7866989ea5f5e2313258045441ba44107c6281c1c9b48d511b1c: Status 404 returned error can't find the container with id 4fccdcb0fb7a7866989ea5f5e2313258045441ba44107c6281c1c9b48d511b1c Jan 23 18:19:45 crc kubenswrapper[4760]: I0123 18:19:45.465358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad","Type":"ContainerStarted","Data":"4fccdcb0fb7a7866989ea5f5e2313258045441ba44107c6281c1c9b48d511b1c"} Jan 23 18:19:45 crc kubenswrapper[4760]: I0123 18:19:45.465506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:19:45 crc kubenswrapper[4760]: W0123 18:19:45.768245 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d4b29c_f0c7_4d78_9db9_72e58e26360a.slice/crio-07ab51da57f13d31f7fcf7a11f409fcc6717f121ffe9d7bab4997d6baf0a1463 WatchSource:0}: Error finding container 07ab51da57f13d31f7fcf7a11f409fcc6717f121ffe9d7bab4997d6baf0a1463: Status 404 returned error can't find the container with id 07ab51da57f13d31f7fcf7a11f409fcc6717f121ffe9d7bab4997d6baf0a1463 Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.794365 4760 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.794696 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44txh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:n
il,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mjfpr_openstack(054ad2ec-7313-481e-8388-61e9d2137cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.795899 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" podUID="054ad2ec-7313-481e-8388-61e9d2137cb8" Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.818419 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.818606 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z685v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9jg6d_openstack(51bda399-6b8d-4673-844d-072b3533dd98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:19:45 crc kubenswrapper[4760]: E0123 18:19:45.819802 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" podUID="51bda399-6b8d-4673-844d-072b3533dd98" Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.185971 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:19:46 crc kubenswrapper[4760]: W0123 18:19:46.221314 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod916c4314_f639_42ce_9c84_48c7b1c4df05.slice/crio-233d8bc075dd3d137f05a5a93663dcaf4aae6bf5c3b7048f2164cb487fe3f6d5 WatchSource:0}: Error finding container 233d8bc075dd3d137f05a5a93663dcaf4aae6bf5c3b7048f2164cb487fe3f6d5: Status 404 returned error can't find the container with id 233d8bc075dd3d137f05a5a93663dcaf4aae6bf5c3b7048f2164cb487fe3f6d5 Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.332339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.480459 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerStarted","Data":"07ab51da57f13d31f7fcf7a11f409fcc6717f121ffe9d7bab4997d6baf0a1463"} Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.482508 4760 generic.go:334] "Generic (PLEG): container finished" podID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerID="406c3850b33e9524617c2fc9cc596bf2dd411d83e0c87b1c39a2b1b3fe35179c" exitCode=0 Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.482591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" event={"ID":"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4","Type":"ContainerDied","Data":"406c3850b33e9524617c2fc9cc596bf2dd411d83e0c87b1c39a2b1b3fe35179c"} Jan 23 
18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.484815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" event={"ID":"a3d40e30-e607-4d95-99ec-0d97b415eddb","Type":"ContainerStarted","Data":"cbdeefa1f16ddea6dc90e463acbcf8e619c5526d5faaaab21b90b68dabf6a48b"} Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.486727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerStarted","Data":"233d8bc075dd3d137f05a5a93663dcaf4aae6bf5c3b7048f2164cb487fe3f6d5"} Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.690507 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.723605 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph"] Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.727348 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.743073 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.822245 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7lf5j"] Jan 23 18:19:46 crc kubenswrapper[4760]: W0123 18:19:46.868215 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef24537_281d_489c_b15b_5610cfc62b32.slice/crio-68f9a30d602cd1416b77669a2ed21027a09f0461614ba067499f373dbc4bc22a WatchSource:0}: Error finding container 68f9a30d602cd1416b77669a2ed21027a09f0461614ba067499f373dbc4bc22a: Status 404 returned error can't find the container with id 68f9a30d602cd1416b77669a2ed21027a09f0461614ba067499f373dbc4bc22a Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 
18:19:46.903546 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.915650 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:19:46 crc kubenswrapper[4760]: W0123 18:19:46.927369 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82ab7b9_010c_49aa_b6cc_a654dad56b87.slice/crio-184e881871e7245039b2c3a7582ed6ab18201e1714d7c8124afef4edaea20440 WatchSource:0}: Error finding container 184e881871e7245039b2c3a7582ed6ab18201e1714d7c8124afef4edaea20440: Status 404 returned error can't find the container with id 184e881871e7245039b2c3a7582ed6ab18201e1714d7c8124afef4edaea20440 Jan 23 18:19:46 crc kubenswrapper[4760]: I0123 18:19:46.939683 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.061225 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z685v\" (UniqueName: \"kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v\") pod \"51bda399-6b8d-4673-844d-072b3533dd98\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.061287 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config\") pod \"054ad2ec-7313-481e-8388-61e9d2137cb8\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.061364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc\") pod 
\"51bda399-6b8d-4673-844d-072b3533dd98\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.061438 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config\") pod \"51bda399-6b8d-4673-844d-072b3533dd98\" (UID: \"51bda399-6b8d-4673-844d-072b3533dd98\") " Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.061481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44txh\" (UniqueName: \"kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh\") pod \"054ad2ec-7313-481e-8388-61e9d2137cb8\" (UID: \"054ad2ec-7313-481e-8388-61e9d2137cb8\") " Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.062365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51bda399-6b8d-4673-844d-072b3533dd98" (UID: "51bda399-6b8d-4673-844d-072b3533dd98"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.062377 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config" (OuterVolumeSpecName: "config") pod "51bda399-6b8d-4673-844d-072b3533dd98" (UID: "51bda399-6b8d-4673-844d-072b3533dd98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.064212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config" (OuterVolumeSpecName: "config") pod "054ad2ec-7313-481e-8388-61e9d2137cb8" (UID: "054ad2ec-7313-481e-8388-61e9d2137cb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.067789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh" (OuterVolumeSpecName: "kube-api-access-44txh") pod "054ad2ec-7313-481e-8388-61e9d2137cb8" (UID: "054ad2ec-7313-481e-8388-61e9d2137cb8"). InnerVolumeSpecName "kube-api-access-44txh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.070522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v" (OuterVolumeSpecName: "kube-api-access-z685v") pod "51bda399-6b8d-4673-844d-072b3533dd98" (UID: "51bda399-6b8d-4673-844d-072b3533dd98"). InnerVolumeSpecName "kube-api-access-z685v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.167990 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z685v\" (UniqueName: \"kubernetes.io/projected/51bda399-6b8d-4673-844d-072b3533dd98-kube-api-access-z685v\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.168026 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054ad2ec-7313-481e-8388-61e9d2137cb8-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.168036 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.168046 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bda399-6b8d-4673-844d-072b3533dd98-config\") on node 
\"crc\" DevicePath \"\"" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.168057 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44txh\" (UniqueName: \"kubernetes.io/projected/054ad2ec-7313-481e-8388-61e9d2137cb8-kube-api-access-44txh\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.494762 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c82ab7b9-010c-49aa-b6cc-a654dad56b87","Type":"ContainerStarted","Data":"184e881871e7245039b2c3a7582ed6ab18201e1714d7c8124afef4edaea20440"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.496233 4760 generic.go:334] "Generic (PLEG): container finished" podID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerID="671ba0b2068c2743cb4a4eeec0b209079be7117acfeeb719de30e9bd31b8accc" exitCode=0 Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.496276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" event={"ID":"a3d40e30-e607-4d95-99ec-0d97b415eddb","Type":"ContainerDied","Data":"671ba0b2068c2743cb4a4eeec0b209079be7117acfeeb719de30e9bd31b8accc"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.497392 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.497442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mjfpr" event={"ID":"054ad2ec-7313-481e-8388-61e9d2137cb8","Type":"ContainerDied","Data":"b2377228006b97fce17e680e17ec0d1c131ae06d78c2d2d0027edc369d2d475d"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.499958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" event={"ID":"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4","Type":"ContainerStarted","Data":"366ba4136e57104734305ef7fc3d6c660823e611a0a331cd19b24b2870b1b9e5"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.500282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.501582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0db24b5a-b078-42ca-b3ef-4abf3cf33531","Type":"ContainerStarted","Data":"155b832a47cd1d3735246387a0ffa70a2a411f342e8fd043a607a2e436e709a0"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.503717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lf5j" event={"ID":"eef24537-281d-489c-b15b-5610cfc62b32","Type":"ContainerStarted","Data":"68f9a30d602cd1416b77669a2ed21027a09f0461614ba067499f373dbc4bc22a"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.504454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph" event={"ID":"ea0533e4-88c1-4a03-93a9-f0680acaafc5","Type":"ContainerStarted","Data":"320ea2a9420e5852a071d10c9dbd97cfa3d4c030e8cd8f28230a8c44f223b3ad"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.505202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"c41fcdb0-57f0-4045-948f-16e9f075ae61","Type":"ContainerStarted","Data":"b257f8e5e2c1aaba03d0acfab37dde9750dd716b01c71d9c4711e31ed984b2d2"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.506022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" event={"ID":"51bda399-6b8d-4673-844d-072b3533dd98","Type":"ContainerDied","Data":"8340bc0b034aba708926cdc5b5f963e897ba0f081036a702bf517038f1c40728"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.506082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9jg6d" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.519359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f98838a6-6ae7-4d8a-af63-4ad4cf693690","Type":"ContainerStarted","Data":"2c59fade446856d8f6d28613bcedc6237019c726fb95f70f7fb1a6d641798c88"} Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.538238 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" podStartSLOduration=6.643059972 podStartE2EDuration="18.538216182s" podCreationTimestamp="2026-01-23 18:19:29 +0000 UTC" firstStartedPulling="2026-01-23 18:19:34.046775444 +0000 UTC m=+1117.049233377" lastFinishedPulling="2026-01-23 18:19:45.941931654 +0000 UTC m=+1128.944389587" observedRunningTime="2026-01-23 18:19:47.530775547 +0000 UTC m=+1130.533233480" watchObservedRunningTime="2026-01-23 18:19:47.538216182 +0000 UTC m=+1130.540674115" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.563737 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.568745 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9jg6d"] Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.611465 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bda399-6b8d-4673-844d-072b3533dd98" path="/var/lib/kubelet/pods/51bda399-6b8d-4673-844d-072b3533dd98/volumes" Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.611790 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.611810 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mjfpr"] Jan 23 18:19:47 crc kubenswrapper[4760]: I0123 18:19:47.712610 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:19:49 crc kubenswrapper[4760]: I0123 18:19:49.538025 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7350c936-08ea-4b64-ae16-a0a7c3241c52","Type":"ContainerStarted","Data":"e59e825753942325dda9c9922a4db185af48e4ab84785a34e33c8e44b9044abe"} Jan 23 18:19:49 crc kubenswrapper[4760]: I0123 18:19:49.607977 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054ad2ec-7313-481e-8388-61e9d2137cb8" path="/var/lib/kubelet/pods/054ad2ec-7313-481e-8388-61e9d2137cb8/volumes" Jan 23 18:19:53 crc kubenswrapper[4760]: I0123 18:19:53.578576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad","Type":"ContainerStarted","Data":"96c4b17852bf1b377f6b746ea3e343e4d9581aac709763a8be236841f107cc1c"} Jan 23 18:19:53 crc kubenswrapper[4760]: I0123 18:19:53.581435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c41fcdb0-57f0-4045-948f-16e9f075ae61","Type":"ContainerStarted","Data":"be61756f59962013dc3bb1e6c860b2055e6f5e61889c8e4433dd622e6a1aac7f"} Jan 23 18:19:53 crc kubenswrapper[4760]: I0123 18:19:53.584749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" 
event={"ID":"a3d40e30-e607-4d95-99ec-0d97b415eddb","Type":"ContainerStarted","Data":"f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132"} Jan 23 18:19:53 crc kubenswrapper[4760]: I0123 18:19:53.585234 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:19:53 crc kubenswrapper[4760]: I0123 18:19:53.628734 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" podStartSLOduration=23.628714197 podStartE2EDuration="23.628714197s" podCreationTimestamp="2026-01-23 18:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:19:53.621088876 +0000 UTC m=+1136.623546809" watchObservedRunningTime="2026-01-23 18:19:53.628714197 +0000 UTC m=+1136.631172130" Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.591600 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph" event={"ID":"ea0533e4-88c1-4a03-93a9-f0680acaafc5","Type":"ContainerStarted","Data":"1e331d6c28cdc6d7448fee39bb7d0b495cbd115d95952b10ecc3f78b551c8a2b"} Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.591887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2wpph" Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.593541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0db24b5a-b078-42ca-b3ef-4abf3cf33531","Type":"ContainerStarted","Data":"93e8c99de1cbe001a831cae7f40fe1294ed1bc102b19281d3191956a7400177a"} Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.640974 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.491161966 podStartE2EDuration="20.640953416s" podCreationTimestamp="2026-01-23 18:19:34 +0000 UTC" firstStartedPulling="2026-01-23 
18:19:46.776124743 +0000 UTC m=+1129.778582676" lastFinishedPulling="2026-01-23 18:19:52.925916193 +0000 UTC m=+1135.928374126" observedRunningTime="2026-01-23 18:19:54.636157293 +0000 UTC m=+1137.638615226" watchObservedRunningTime="2026-01-23 18:19:54.640953416 +0000 UTC m=+1137.643411349" Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.641587 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2wpph" podStartSLOduration=9.475276345 podStartE2EDuration="15.641581213s" podCreationTimestamp="2026-01-23 18:19:39 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.757965709 +0000 UTC m=+1129.760423642" lastFinishedPulling="2026-01-23 18:19:52.924270577 +0000 UTC m=+1135.926728510" observedRunningTime="2026-01-23 18:19:54.617119444 +0000 UTC m=+1137.619577377" watchObservedRunningTime="2026-01-23 18:19:54.641581213 +0000 UTC m=+1137.644039146" Jan 23 18:19:54 crc kubenswrapper[4760]: I0123 18:19:54.755192 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.121694 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.603794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7350c936-08ea-4b64-ae16-a0a7c3241c52","Type":"ContainerStarted","Data":"f61060e4f7ac678275b4813c1301b35abd1dc09cdf327a45b464a0c61300dee6"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.606882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerStarted","Data":"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.608987 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"f98838a6-6ae7-4d8a-af63-4ad4cf693690","Type":"ContainerStarted","Data":"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.609395 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.611026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c82ab7b9-010c-49aa-b6cc-a654dad56b87","Type":"ContainerStarted","Data":"26ae3138cc4c96a34920b2f96541d8c5ae3979c1d956083243a51f03641cadaf"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.617852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerStarted","Data":"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.620943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lf5j" event={"ID":"eef24537-281d-489c-b15b-5610cfc62b32","Type":"ContainerDied","Data":"d5c69c4d0de0e4398d2f22450ed1c20b31b61ad3e13fd50db83fa96a41834e05"} Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.621208 4760 generic.go:334] "Generic (PLEG): container finished" podID="eef24537-281d-489c-b15b-5610cfc62b32" containerID="d5c69c4d0de0e4398d2f22450ed1c20b31b61ad3e13fd50db83fa96a41834e05" exitCode=0 Jan 23 18:19:55 crc kubenswrapper[4760]: I0123 18:19:55.704572 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.971323746 podStartE2EDuration="20.704550609s" podCreationTimestamp="2026-01-23 18:19:35 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.745798461 +0000 UTC m=+1129.748256394" lastFinishedPulling="2026-01-23 18:19:54.479025324 +0000 UTC m=+1137.481483257" 
observedRunningTime="2026-01-23 18:19:55.702612886 +0000 UTC m=+1138.705070819" watchObservedRunningTime="2026-01-23 18:19:55.704550609 +0000 UTC m=+1138.707008542" Jan 23 18:19:56 crc kubenswrapper[4760]: I0123 18:19:56.635226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lf5j" event={"ID":"eef24537-281d-489c-b15b-5610cfc62b32","Type":"ContainerStarted","Data":"4d843a9daf573c6100e6b7c232641a7f57e1f36977c1aade3711695bd0c3a524"} Jan 23 18:19:56 crc kubenswrapper[4760]: I0123 18:19:56.635588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lf5j" event={"ID":"eef24537-281d-489c-b15b-5610cfc62b32","Type":"ContainerStarted","Data":"92f5685bc87718ccc0cdb075d2dda1f84b1f26fdd9559ecc8875809da0c45fd5"} Jan 23 18:19:56 crc kubenswrapper[4760]: I0123 18:19:56.655782 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7lf5j" podStartSLOduration=11.540585136 podStartE2EDuration="17.655760115s" podCreationTimestamp="2026-01-23 18:19:39 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.878308568 +0000 UTC m=+1129.880766501" lastFinishedPulling="2026-01-23 18:19:52.993483547 +0000 UTC m=+1135.995941480" observedRunningTime="2026-01-23 18:19:56.653290806 +0000 UTC m=+1139.655748759" watchObservedRunningTime="2026-01-23 18:19:56.655760115 +0000 UTC m=+1139.658218048" Jan 23 18:19:57 crc kubenswrapper[4760]: I0123 18:19:57.647614 4760 generic.go:334] "Generic (PLEG): container finished" podID="c41fcdb0-57f0-4045-948f-16e9f075ae61" containerID="be61756f59962013dc3bb1e6c860b2055e6f5e61889c8e4433dd622e6a1aac7f" exitCode=0 Jan 23 18:19:57 crc kubenswrapper[4760]: I0123 18:19:57.647648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c41fcdb0-57f0-4045-948f-16e9f075ae61","Type":"ContainerDied","Data":"be61756f59962013dc3bb1e6c860b2055e6f5e61889c8e4433dd622e6a1aac7f"} Jan 23 18:19:57 crc 
kubenswrapper[4760]: I0123 18:19:57.655824 4760 generic.go:334] "Generic (PLEG): container finished" podID="eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad" containerID="96c4b17852bf1b377f6b746ea3e343e4d9581aac709763a8be236841f107cc1c" exitCode=0 Jan 23 18:19:57 crc kubenswrapper[4760]: I0123 18:19:57.655946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad","Type":"ContainerDied","Data":"96c4b17852bf1b377f6b746ea3e343e4d9581aac709763a8be236841f107cc1c"} Jan 23 18:19:57 crc kubenswrapper[4760]: I0123 18:19:57.657728 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:57 crc kubenswrapper[4760]: I0123 18:19:57.657777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.669306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad","Type":"ContainerStarted","Data":"05fa235955dded7544a691543dba6ff3b4ae766eb980a8feabdaacb033f9eb39"} Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.671852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c41fcdb0-57f0-4045-948f-16e9f075ae61","Type":"ContainerStarted","Data":"0f08431326bd4f4c24d3830821b17543551f6e3726c8f71ab3ff8fcdc0382ad3"} Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.676453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7350c936-08ea-4b64-ae16-a0a7c3241c52","Type":"ContainerStarted","Data":"ceffef9f3a21209b2dcf1e1f9fc84a11858f8c87eb9c5789cc7cbda57c21dab2"} Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.679914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"c82ab7b9-010c-49aa-b6cc-a654dad56b87","Type":"ContainerStarted","Data":"7772233454aa8bb90fcfe362061a4032369e551fb9ce42d2910245e00c9fb70b"} Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.699696 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.826484946 podStartE2EDuration="27.699671121s" podCreationTimestamp="2026-01-23 18:19:31 +0000 UTC" firstStartedPulling="2026-01-23 18:19:45.051078842 +0000 UTC m=+1128.053536775" lastFinishedPulling="2026-01-23 18:19:52.924265007 +0000 UTC m=+1135.926722950" observedRunningTime="2026-01-23 18:19:58.694915509 +0000 UTC m=+1141.697373492" watchObservedRunningTime="2026-01-23 18:19:58.699671121 +0000 UTC m=+1141.702129094" Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.733213 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.250551493 podStartE2EDuration="20.733184031s" podCreationTimestamp="2026-01-23 18:19:38 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.930467794 +0000 UTC m=+1129.932925727" lastFinishedPulling="2026-01-23 18:19:57.413100332 +0000 UTC m=+1140.415558265" observedRunningTime="2026-01-23 18:19:58.719383868 +0000 UTC m=+1141.721841821" watchObservedRunningTime="2026-01-23 18:19:58.733184031 +0000 UTC m=+1141.735642004" Jan 23 18:19:58 crc kubenswrapper[4760]: I0123 18:19:58.741797 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.985642091999999 podStartE2EDuration="17.741771449s" podCreationTimestamp="2026-01-23 18:19:41 +0000 UTC" firstStartedPulling="2026-01-23 18:19:48.678420251 +0000 UTC m=+1131.680878184" lastFinishedPulling="2026-01-23 18:19:57.434549618 +0000 UTC m=+1140.437007541" observedRunningTime="2026-01-23 18:19:58.735062652 +0000 UTC m=+1141.737520595" watchObservedRunningTime="2026-01-23 18:19:58.741771449 +0000 UTC 
m=+1141.744229422" Jan 23 18:19:59 crc kubenswrapper[4760]: I0123 18:19:59.756174 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 18:19:59 crc kubenswrapper[4760]: I0123 18:19:59.772047 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.737637199 podStartE2EDuration="27.772022587s" podCreationTimestamp="2026-01-23 18:19:32 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.744387093 +0000 UTC m=+1129.746845026" lastFinishedPulling="2026-01-23 18:19:52.778772481 +0000 UTC m=+1135.781230414" observedRunningTime="2026-01-23 18:19:58.770454775 +0000 UTC m=+1141.772912718" watchObservedRunningTime="2026-01-23 18:19:59.772022587 +0000 UTC m=+1142.774480520" Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.348570 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.451319 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.496355 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.496592 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="dnsmasq-dns" containerID="cri-o://366ba4136e57104734305ef7fc3d6c660823e611a0a331cd19b24b2870b1b9e5" gracePeriod=10 Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.695909 4760 generic.go:334] "Generic (PLEG): container finished" podID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerID="366ba4136e57104734305ef7fc3d6c660823e611a0a331cd19b24b2870b1b9e5" exitCode=0 Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.696073 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" event={"ID":"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4","Type":"ContainerDied","Data":"366ba4136e57104734305ef7fc3d6c660823e611a0a331cd19b24b2870b1b9e5"} Jan 23 18:20:00 crc kubenswrapper[4760]: I0123 18:20:00.920286 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.000004 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.020966 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config\") pod \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.021046 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc\") pod \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.021096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkcvt\" (UniqueName: \"kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt\") pod \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\" (UID: \"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4\") " Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.026023 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt" (OuterVolumeSpecName: "kube-api-access-dkcvt") pod "3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" (UID: 
"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4"). InnerVolumeSpecName "kube-api-access-dkcvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.034964 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.054872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config" (OuterVolumeSpecName: "config") pod "3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" (UID: "3e0e5dfa-e0c6-45cb-847c-d7308ae330f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.059026 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" (UID: "3e0e5dfa-e0c6-45cb-847c-d7308ae330f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.123278 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.123319 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.123329 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkcvt\" (UniqueName: \"kubernetes.io/projected/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4-kube-api-access-dkcvt\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.349054 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.382484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.707881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" event={"ID":"3e0e5dfa-e0c6-45cb-847c-d7308ae330f4","Type":"ContainerDied","Data":"e47099959ba2d93084fe5b52d119046533b81f9c1c23c2f8e5e707f423c01c69"} Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.707975 4760 scope.go:117] "RemoveContainer" containerID="366ba4136e57104734305ef7fc3d6c660823e611a0a331cd19b24b2870b1b9e5" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.708125 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.708171 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xt5n" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.740859 4760 scope.go:117] "RemoveContainer" containerID="406c3850b33e9524617c2fc9cc596bf2dd411d83e0c87b1c39a2b1b3fe35179c" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.756755 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.765832 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xt5n"] Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.771669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 18:20:01 crc kubenswrapper[4760]: I0123 18:20:01.771773 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.008258 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dc25n"] Jan 23 18:20:02 crc kubenswrapper[4760]: E0123 18:20:02.008686 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="init" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.008706 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="init" Jan 23 18:20:02 crc kubenswrapper[4760]: E0123 18:20:02.008728 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="dnsmasq-dns" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.008736 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="dnsmasq-dns" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.008928 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" containerName="dnsmasq-dns" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.009564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.014728 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.042837 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovn-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.042934 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-config\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.042975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74bt\" (UniqueName: \"kubernetes.io/projected/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-kube-api-access-l74bt\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.043011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovs-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: 
\"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.043069 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-combined-ca-bundle\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.043116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.056969 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dc25n"] Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.074347 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"] Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.075935 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.085227 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.129721 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"] Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148342 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovs-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xnn\" (UniqueName: \"kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148483 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-combined-ca-bundle\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n" Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" Jan 23 
18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovn-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-config\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.148744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74bt\" (UniqueName: \"kubernetes.io/projected/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-kube-api-access-l74bt\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.149390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovs-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.153605 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-ovn-rundir\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.154445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-config\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.155115 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.166130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-combined-ca-bundle\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.184214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74bt\" (UniqueName: \"kubernetes.io/projected/b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212-kube-api-access-l74bt\") pod \"ovn-controller-metrics-dc25n\" (UID: \"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212\") " pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.250016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.250135 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xnn\" (UniqueName: \"kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.250167 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.250227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.251110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.251111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.251954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.273282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xnn\" (UniqueName: \"kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn\") pod \"dnsmasq-dns-7fd796d7df-j8t57\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.273370 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.274956 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.279138 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8z7qb"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.279429 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.279629 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.282084 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.287251 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"]
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.287956 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.302240 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.331395 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dc25n"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351226 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxl2\" (UniqueName: \"kubernetes.io/projected/519052b1-de37-42f3-8811-9252e225ad9b-kube-api-access-6dxl2\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-scripts\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351362 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-config\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351433 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.351462 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/519052b1-de37-42f3-8811-9252e225ad9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.375923 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"]
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.377900 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.379316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"]
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.387081 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-scripts\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-config\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460841 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/519052b1-de37-42f3-8811-9252e225ad9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.460992 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629jw\" (UniqueName: \"kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.461075 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.461112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.461144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxl2\" (UniqueName: \"kubernetes.io/projected/519052b1-de37-42f3-8811-9252e225ad9b-kube-api-access-6dxl2\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.462720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-scripts\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.464039 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/519052b1-de37-42f3-8811-9252e225ad9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.466191 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519052b1-de37-42f3-8811-9252e225ad9b-config\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.469594 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.469839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.470223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/519052b1-de37-42f3-8811-9252e225ad9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.480345 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxl2\" (UniqueName: \"kubernetes.io/projected/519052b1-de37-42f3-8811-9252e225ad9b-kube-api-access-6dxl2\") pod \"ovn-northd-0\" (UID: \"519052b1-de37-42f3-8811-9252e225ad9b\") " pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.562040 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.562236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.562270 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.562327 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629jw\" (UniqueName: \"kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.562369 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.563003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.563116 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.563589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.564182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.596688 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629jw\" (UniqueName: \"kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw\") pod \"dnsmasq-dns-86db49b7ff-dt5ls\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.707743 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.712576 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.756753 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.756799 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.801728 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"]
Jan 23 18:20:02 crc kubenswrapper[4760]: W0123 18:20:02.809401 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb512cfcc_0d35_4434_8556_c618d374bdbf.slice/crio-268004b1ae01abee525f0de4a4a6b22c4b70ddd5d8637d8da4f36769bf0d8b96 WatchSource:0}: Error finding container 268004b1ae01abee525f0de4a4a6b22c4b70ddd5d8637d8da4f36769bf0d8b96: Status 404 returned error can't find the container with id 268004b1ae01abee525f0de4a4a6b22c4b70ddd5d8637d8da4f36769bf0d8b96
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.901275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 23 18:20:02 crc kubenswrapper[4760]: I0123 18:20:02.937662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dc25n"]
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.197134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"]
Jan 23 18:20:03 crc kubenswrapper[4760]: W0123 18:20:03.199635 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd44778_15c8_48cd_86ee_29cf85d7fc7a.slice/crio-cd7fb0c95b95ec866b943119f6972456fd03cc3cceb59db759ac43a39a93330c WatchSource:0}: Error finding container cd7fb0c95b95ec866b943119f6972456fd03cc3cceb59db759ac43a39a93330c: Status 404 returned error can't find the container with id cd7fb0c95b95ec866b943119f6972456fd03cc3cceb59db759ac43a39a93330c
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.261781 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 18:20:03 crc kubenswrapper[4760]: W0123 18:20:03.266016 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519052b1_de37_42f3_8811_9252e225ad9b.slice/crio-5e7bd08e4277911ae020924ebf97f96abc9c0699b71c4198f0d8057a30b3e0a5 WatchSource:0}: Error finding container 5e7bd08e4277911ae020924ebf97f96abc9c0699b71c4198f0d8057a30b3e0a5: Status 404 returned error can't find the container with id 5e7bd08e4277911ae020924ebf97f96abc9c0699b71c4198f0d8057a30b3e0a5
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.606752 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0e5dfa-e0c6-45cb-847c-d7308ae330f4" path="/var/lib/kubelet/pods/3e0e5dfa-e0c6-45cb-847c-d7308ae330f4/volumes"
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.726095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dc25n" event={"ID":"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212","Type":"ContainerStarted","Data":"acbb1bde53c2cb8c7deb41c0ee1c79ea3c6edfa41837059952869929c5d3517e"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.726182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dc25n" event={"ID":"b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212","Type":"ContainerStarted","Data":"0bcbd7bdafef7bcbaae95d8e450ff1de7e9c3548a3194c3b237966a24cad307a"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.728858 4760 generic.go:334] "Generic (PLEG): container finished" podID="b512cfcc-0d35-4434-8556-c618d374bdbf" containerID="cef9c4d332e7fa9139018df576bf91b7c187611c0533b6c407e13bba9c7d1e2b" exitCode=0
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.728918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" event={"ID":"b512cfcc-0d35-4434-8556-c618d374bdbf","Type":"ContainerDied","Data":"cef9c4d332e7fa9139018df576bf91b7c187611c0533b6c407e13bba9c7d1e2b"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.728940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" event={"ID":"b512cfcc-0d35-4434-8556-c618d374bdbf","Type":"ContainerStarted","Data":"268004b1ae01abee525f0de4a4a6b22c4b70ddd5d8637d8da4f36769bf0d8b96"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.730763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"519052b1-de37-42f3-8811-9252e225ad9b","Type":"ContainerStarted","Data":"5e7bd08e4277911ae020924ebf97f96abc9c0699b71c4198f0d8057a30b3e0a5"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.732381 4760 generic.go:334] "Generic (PLEG): container finished" podID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerID="16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32" exitCode=0
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.732436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" event={"ID":"2dd44778-15c8-48cd-86ee-29cf85d7fc7a","Type":"ContainerDied","Data":"16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.732728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" event={"ID":"2dd44778-15c8-48cd-86ee-29cf85d7fc7a","Type":"ContainerStarted","Data":"cd7fb0c95b95ec866b943119f6972456fd03cc3cceb59db759ac43a39a93330c"}
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.765485 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dc25n" podStartSLOduration=2.765465062 podStartE2EDuration="2.765465062s" podCreationTimestamp="2026-01-23 18:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:03.749339114 +0000 UTC m=+1146.751797047" watchObservedRunningTime="2026-01-23 18:20:03.765465062 +0000 UTC m=+1146.767923005"
Jan 23 18:20:03 crc kubenswrapper[4760]: I0123 18:20:03.892996 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.086322 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.086364 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.238826 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4388-account-create-update-lfh6r"]
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.239869 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.243444 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.248862 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4388-account-create-update-lfh6r"]
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.268106 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.301478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.301566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhbn\" (UniqueName: \"kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.402425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb\") pod \"b512cfcc-0d35-4434-8556-c618d374bdbf\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") "
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.402513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config\") pod \"b512cfcc-0d35-4434-8556-c618d374bdbf\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") "
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.402537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc\") pod \"b512cfcc-0d35-4434-8556-c618d374bdbf\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") "
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.402649 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xnn\" (UniqueName: \"kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn\") pod \"b512cfcc-0d35-4434-8556-c618d374bdbf\" (UID: \"b512cfcc-0d35-4434-8556-c618d374bdbf\") "
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.402998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhbn\" (UniqueName: \"kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.403132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.404086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.408964 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2lmmb"]
Jan 23 18:20:04 crc kubenswrapper[4760]: E0123 18:20:04.409374 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b512cfcc-0d35-4434-8556-c618d374bdbf" containerName="init"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.409395 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b512cfcc-0d35-4434-8556-c618d374bdbf" containerName="init"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.409626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b512cfcc-0d35-4434-8556-c618d374bdbf" containerName="init"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.410119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn" (OuterVolumeSpecName: "kube-api-access-49xnn") pod "b512cfcc-0d35-4434-8556-c618d374bdbf" (UID: "b512cfcc-0d35-4434-8556-c618d374bdbf"). InnerVolumeSpecName "kube-api-access-49xnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.410237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2lmmb"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.426344 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2lmmb"]
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.433702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b512cfcc-0d35-4434-8556-c618d374bdbf" (UID: "b512cfcc-0d35-4434-8556-c618d374bdbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.434266 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhbn\" (UniqueName: \"kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn\") pod \"keystone-4388-account-create-update-lfh6r\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " pod="openstack/keystone-4388-account-create-update-lfh6r"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.437439 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c9ab-account-create-update-6hjgj"]
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.438559 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c9ab-account-create-update-6hjgj"
Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.439767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config" (OuterVolumeSpecName: "config") pod "b512cfcc-0d35-4434-8556-c618d374bdbf" (UID: "b512cfcc-0d35-4434-8556-c618d374bdbf"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.440619 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.445074 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c9ab-account-create-update-6hjgj"] Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.446152 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b512cfcc-0d35-4434-8556-c618d374bdbf" (UID: "b512cfcc-0d35-4434-8556-c618d374bdbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.504981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbk66\" (UniqueName: \"kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66\") pod \"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkv5f\" (UniqueName: \"kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505207 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts\") pod 
\"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505248 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505331 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xnn\" (UniqueName: \"kubernetes.io/projected/b512cfcc-0d35-4434-8556-c618d374bdbf-kube-api-access-49xnn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505345 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505358 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.505370 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b512cfcc-0d35-4434-8556-c618d374bdbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.593969 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4388-account-create-update-lfh6r" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.607352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbk66\" (UniqueName: \"kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66\") pod \"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.607464 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkv5f\" (UniqueName: \"kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.607513 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts\") pod \"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.607556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.609075 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts\") pod 
\"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.610448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.624986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkv5f\" (UniqueName: \"kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f\") pod \"placement-db-create-2lmmb\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.627380 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbk66\" (UniqueName: \"kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66\") pod \"placement-c9ab-account-create-update-6hjgj\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.742519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" event={"ID":"2dd44778-15c8-48cd-86ee-29cf85d7fc7a","Type":"ContainerStarted","Data":"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57"} Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.742976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.744652 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" event={"ID":"b512cfcc-0d35-4434-8556-c618d374bdbf","Type":"ContainerDied","Data":"268004b1ae01abee525f0de4a4a6b22c4b70ddd5d8637d8da4f36769bf0d8b96"} Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.744756 4760 scope.go:117] "RemoveContainer" containerID="cef9c4d332e7fa9139018df576bf91b7c187611c0533b6c407e13bba9c7d1e2b" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.744866 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-j8t57" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.761730 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.762326 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" podStartSLOduration=2.762307473 podStartE2EDuration="2.762307473s" podCreationTimestamp="2026-01-23 18:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:04.762296663 +0000 UTC m=+1147.764754596" watchObservedRunningTime="2026-01-23 18:20:04.762307473 +0000 UTC m=+1147.764765406" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.768088 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.827245 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"] Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.837929 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-j8t57"] Jan 23 18:20:04 crc kubenswrapper[4760]: I0123 18:20:04.997343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.109242 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.385744 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2lmmb"] Jan 23 18:20:05 crc kubenswrapper[4760]: W0123 18:20:05.388004 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f4e759f_76d3_44b0_ad03_134407e85cd5.slice/crio-aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65 WatchSource:0}: Error finding container aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65: Status 404 returned error can't find the container with id aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65 Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.451233 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4388-account-create-update-lfh6r"] Jan 23 18:20:05 crc kubenswrapper[4760]: W0123 18:20:05.452270 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb4d716_63d1_46ae_8134_3ba82ec34339.slice/crio-e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398 WatchSource:0}: Error finding container 
e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398: Status 404 returned error can't find the container with id e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398 Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.457689 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c9ab-account-create-update-6hjgj"] Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.605581 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b512cfcc-0d35-4434-8556-c618d374bdbf" path="/var/lib/kubelet/pods/b512cfcc-0d35-4434-8556-c618d374bdbf/volumes" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.751719 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4388-account-create-update-lfh6r" event={"ID":"9cb4d716-63d1-46ae-8134-3ba82ec34339","Type":"ContainerStarted","Data":"5189b22122c2e262c8ec4d58eacded9e1fcd049fdade6c8d8e5e8b736ce4036e"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.751771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4388-account-create-update-lfh6r" event={"ID":"9cb4d716-63d1-46ae-8134-3ba82ec34339","Type":"ContainerStarted","Data":"e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.753989 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f4e759f-76d3-44b0-ad03-134407e85cd5" containerID="3504667d3af85e677e928085a7b98cda90dac1cc12a9f5e3b62bce7bcdc4a934" exitCode=0 Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.754060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2lmmb" event={"ID":"0f4e759f-76d3-44b0-ad03-134407e85cd5","Type":"ContainerDied","Data":"3504667d3af85e677e928085a7b98cda90dac1cc12a9f5e3b62bce7bcdc4a934"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.754081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2lmmb" 
event={"ID":"0f4e759f-76d3-44b0-ad03-134407e85cd5","Type":"ContainerStarted","Data":"aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.756142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"519052b1-de37-42f3-8811-9252e225ad9b","Type":"ContainerStarted","Data":"b13cf87ad7befabc7b7ae4c1c51ee3f0ec4d261dcc3b9ddee6ad7f6494a66e45"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.756174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"519052b1-de37-42f3-8811-9252e225ad9b","Type":"ContainerStarted","Data":"a635fcd4b19c34684a8e849c31f2c083742b3cc3c79e77fb749493d98afe8e08"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.756724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.757985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c9ab-account-create-update-6hjgj" event={"ID":"1f41ec84-d27d-4f53-b556-347cd55c7fd4","Type":"ContainerStarted","Data":"3d3354c0a0acd2389ac264294f0eb3debde91ebd3de5b95df816a042175e9b46"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.758017 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c9ab-account-create-update-6hjgj" event={"ID":"1f41ec84-d27d-4f53-b556-347cd55c7fd4","Type":"ContainerStarted","Data":"1d1ead900a8ab745b25709c63b3fca00733ad60bee654c59cbca0d4bf2353329"} Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.771172 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-4388-account-create-update-lfh6r" podStartSLOduration=1.7711576180000002 podStartE2EDuration="1.771157618s" podCreationTimestamp="2026-01-23 18:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 18:20:05.767050703 +0000 UTC m=+1148.769508656" watchObservedRunningTime="2026-01-23 18:20:05.771157618 +0000 UTC m=+1148.773615551" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.782272 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c9ab-account-create-update-6hjgj" podStartSLOduration=1.782251435 podStartE2EDuration="1.782251435s" podCreationTimestamp="2026-01-23 18:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:05.780885177 +0000 UTC m=+1148.783343110" watchObservedRunningTime="2026-01-23 18:20:05.782251435 +0000 UTC m=+1148.784709388" Jan 23 18:20:05 crc kubenswrapper[4760]: I0123 18:20:05.823394 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.178730706 podStartE2EDuration="3.823369566s" podCreationTimestamp="2026-01-23 18:20:02 +0000 UTC" firstStartedPulling="2026-01-23 18:20:03.268535578 +0000 UTC m=+1146.270993501" lastFinishedPulling="2026-01-23 18:20:04.913174418 +0000 UTC m=+1147.915632361" observedRunningTime="2026-01-23 18:20:05.815370424 +0000 UTC m=+1148.817828357" watchObservedRunningTime="2026-01-23 18:20:05.823369566 +0000 UTC m=+1148.825827499" Jan 23 18:20:06 crc kubenswrapper[4760]: I0123 18:20:06.284620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 18:20:06 crc kubenswrapper[4760]: I0123 18:20:06.768309 4760 generic.go:334] "Generic (PLEG): container finished" podID="1f41ec84-d27d-4f53-b556-347cd55c7fd4" containerID="3d3354c0a0acd2389ac264294f0eb3debde91ebd3de5b95df816a042175e9b46" exitCode=0 Jan 23 18:20:06 crc kubenswrapper[4760]: I0123 18:20:06.768368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c9ab-account-create-update-6hjgj" 
event={"ID":"1f41ec84-d27d-4f53-b556-347cd55c7fd4","Type":"ContainerDied","Data":"3d3354c0a0acd2389ac264294f0eb3debde91ebd3de5b95df816a042175e9b46"} Jan 23 18:20:06 crc kubenswrapper[4760]: I0123 18:20:06.770940 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cb4d716-63d1-46ae-8134-3ba82ec34339" containerID="5189b22122c2e262c8ec4d58eacded9e1fcd049fdade6c8d8e5e8b736ce4036e" exitCode=0 Jan 23 18:20:06 crc kubenswrapper[4760]: I0123 18:20:06.771051 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4388-account-create-update-lfh6r" event={"ID":"9cb4d716-63d1-46ae-8134-3ba82ec34339","Type":"ContainerDied","Data":"5189b22122c2e262c8ec4d58eacded9e1fcd049fdade6c8d8e5e8b736ce4036e"} Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.243210 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.360700 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkv5f\" (UniqueName: \"kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f\") pod \"0f4e759f-76d3-44b0-ad03-134407e85cd5\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.360829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts\") pod \"0f4e759f-76d3-44b0-ad03-134407e85cd5\" (UID: \"0f4e759f-76d3-44b0-ad03-134407e85cd5\") " Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.361610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f4e759f-76d3-44b0-ad03-134407e85cd5" (UID: "0f4e759f-76d3-44b0-ad03-134407e85cd5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.366488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f" (OuterVolumeSpecName: "kube-api-access-zkv5f") pod "0f4e759f-76d3-44b0-ad03-134407e85cd5" (UID: "0f4e759f-76d3-44b0-ad03-134407e85cd5"). InnerVolumeSpecName "kube-api-access-zkv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.463751 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkv5f\" (UniqueName: \"kubernetes.io/projected/0f4e759f-76d3-44b0-ad03-134407e85cd5-kube-api-access-zkv5f\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.463807 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4e759f-76d3-44b0-ad03-134407e85cd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.780286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2lmmb" event={"ID":"0f4e759f-76d3-44b0-ad03-134407e85cd5","Type":"ContainerDied","Data":"aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65"} Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.780376 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa758c25373b362ce4fe7bd59e72c4c81e193bf54ec2799ccb45dfdbee07cb65" Jan 23 18:20:07 crc kubenswrapper[4760]: I0123 18:20:07.781148 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2lmmb" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.182035 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.189942 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4388-account-create-update-lfh6r" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.284375 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts\") pod \"9cb4d716-63d1-46ae-8134-3ba82ec34339\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.284494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlhbn\" (UniqueName: \"kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn\") pod \"9cb4d716-63d1-46ae-8134-3ba82ec34339\" (UID: \"9cb4d716-63d1-46ae-8134-3ba82ec34339\") " Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.284643 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbk66\" (UniqueName: \"kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66\") pod \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.284921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cb4d716-63d1-46ae-8134-3ba82ec34339" (UID: "9cb4d716-63d1-46ae-8134-3ba82ec34339"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.285399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts\") pod \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\" (UID: \"1f41ec84-d27d-4f53-b556-347cd55c7fd4\") " Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.285871 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f41ec84-d27d-4f53-b556-347cd55c7fd4" (UID: "1f41ec84-d27d-4f53-b556-347cd55c7fd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.286165 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f41ec84-d27d-4f53-b556-347cd55c7fd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.286189 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb4d716-63d1-46ae-8134-3ba82ec34339-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.302744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66" (OuterVolumeSpecName: "kube-api-access-nbk66") pod "1f41ec84-d27d-4f53-b556-347cd55c7fd4" (UID: "1f41ec84-d27d-4f53-b556-347cd55c7fd4"). InnerVolumeSpecName "kube-api-access-nbk66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.302809 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn" (OuterVolumeSpecName: "kube-api-access-dlhbn") pod "9cb4d716-63d1-46ae-8134-3ba82ec34339" (UID: "9cb4d716-63d1-46ae-8134-3ba82ec34339"). InnerVolumeSpecName "kube-api-access-dlhbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.387970 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbk66\" (UniqueName: \"kubernetes.io/projected/1f41ec84-d27d-4f53-b556-347cd55c7fd4-kube-api-access-nbk66\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.388021 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlhbn\" (UniqueName: \"kubernetes.io/projected/9cb4d716-63d1-46ae-8134-3ba82ec34339-kube-api-access-dlhbn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.791140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c9ab-account-create-update-6hjgj" event={"ID":"1f41ec84-d27d-4f53-b556-347cd55c7fd4","Type":"ContainerDied","Data":"1d1ead900a8ab745b25709c63b3fca00733ad60bee654c59cbca0d4bf2353329"} Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.791472 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1ead900a8ab745b25709c63b3fca00733ad60bee654c59cbca0d4bf2353329" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.791547 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c9ab-account-create-update-6hjgj" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.794334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4388-account-create-update-lfh6r" event={"ID":"9cb4d716-63d1-46ae-8134-3ba82ec34339","Type":"ContainerDied","Data":"e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398"} Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.794397 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e822df1d4795b0535cc0ed56d56e61aaca55b1add49bbad817d0b9ca38162398" Jan 23 18:20:08 crc kubenswrapper[4760]: I0123 18:20:08.794523 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4388-account-create-update-lfh6r" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931150 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lqcnv"] Jan 23 18:20:09 crc kubenswrapper[4760]: E0123 18:20:09.931491 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb4d716-63d1-46ae-8134-3ba82ec34339" containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931504 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb4d716-63d1-46ae-8134-3ba82ec34339" containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: E0123 18:20:09.931516 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4e759f-76d3-44b0-ad03-134407e85cd5" containerName="mariadb-database-create" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931522 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4e759f-76d3-44b0-ad03-134407e85cd5" containerName="mariadb-database-create" Jan 23 18:20:09 crc kubenswrapper[4760]: E0123 18:20:09.931550 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41ec84-d27d-4f53-b556-347cd55c7fd4" 
containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931557 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41ec84-d27d-4f53-b556-347cd55c7fd4" containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931694 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f41ec84-d27d-4f53-b556-347cd55c7fd4" containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931703 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb4d716-63d1-46ae-8134-3ba82ec34339" containerName="mariadb-account-create-update" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.931719 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4e759f-76d3-44b0-ad03-134407e85cd5" containerName="mariadb-database-create" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.932303 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:09 crc kubenswrapper[4760]: I0123 18:20:09.941829 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lqcnv"] Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.011928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.011991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vznhq\" (UniqueName: \"kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.053651 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8c35-account-create-update-wrntz"] Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.054837 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.056910 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.060218 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8c35-account-create-update-wrntz"] Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.113951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.114010 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts\") pod \"glance-8c35-account-create-update-wrntz\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.114037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vznhq\" (UniqueName: \"kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.114110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5n4\" (UniqueName: \"kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4\") pod \"glance-8c35-account-create-update-wrntz\" (UID: 
\"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.114904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.132214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vznhq\" (UniqueName: \"kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq\") pod \"glance-db-create-lqcnv\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.215224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp5n4\" (UniqueName: \"kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4\") pod \"glance-8c35-account-create-update-wrntz\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.215315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts\") pod \"glance-8c35-account-create-update-wrntz\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.216114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts\") pod \"glance-8c35-account-create-update-wrntz\" 
(UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.231561 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp5n4\" (UniqueName: \"kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4\") pod \"glance-8c35-account-create-update-wrntz\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.250809 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.368266 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:10 crc kubenswrapper[4760]: W0123 18:20:10.697627 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e438fd_20d8_4fd5_966b_7166b9dc8fed.slice/crio-4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052 WatchSource:0}: Error finding container 4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052: Status 404 returned error can't find the container with id 4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052 Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.703799 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lqcnv"] Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.799965 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8c35-account-create-update-wrntz"] Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.807677 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lqcnv" 
event={"ID":"a4e438fd-20d8-4fd5-966b-7166b9dc8fed","Type":"ContainerStarted","Data":"4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052"} Jan 23 18:20:10 crc kubenswrapper[4760]: I0123 18:20:10.809315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c35-account-create-update-wrntz" event={"ID":"14f13de4-d7c3-4501-9f08-c98dcaac24c7","Type":"ContainerStarted","Data":"f23d39a58fab05105d8230b924b596a5c98d9c16ba12638aac6e147d9e84fa07"} Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.391772 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fhhkh"] Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.393040 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.400266 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.403676 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fhhkh"] Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.436094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.436205 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " 
pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.537272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.537436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.538655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.557247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f\") pod \"root-account-create-update-fhhkh\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.711138 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.836095 4760 generic.go:334] "Generic (PLEG): container finished" podID="14f13de4-d7c3-4501-9f08-c98dcaac24c7" containerID="ad950e727b9dd98dd5ba4f71052d2e1962f9982efb276569be0facc11431ed68" exitCode=0 Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.836524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c35-account-create-update-wrntz" event={"ID":"14f13de4-d7c3-4501-9f08-c98dcaac24c7","Type":"ContainerDied","Data":"ad950e727b9dd98dd5ba4f71052d2e1962f9982efb276569be0facc11431ed68"} Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.841503 4760 generic.go:334] "Generic (PLEG): container finished" podID="a4e438fd-20d8-4fd5-966b-7166b9dc8fed" containerID="b4253915f7c44fe9583f96296c8cac84b3d35a9a94dac37206815f713efc6d7c" exitCode=0 Jan 23 18:20:11 crc kubenswrapper[4760]: I0123 18:20:11.841540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lqcnv" event={"ID":"a4e438fd-20d8-4fd5-966b-7166b9dc8fed","Type":"ContainerDied","Data":"b4253915f7c44fe9583f96296c8cac84b3d35a9a94dac37206815f713efc6d7c"} Jan 23 18:20:12 crc kubenswrapper[4760]: I0123 18:20:12.142651 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fhhkh"] Jan 23 18:20:12 crc kubenswrapper[4760]: W0123 18:20:12.156707 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd24761f8_e610_4947_a62f_83b495cbf71b.slice/crio-91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21 WatchSource:0}: Error finding container 91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21: Status 404 returned error can't find the container with id 91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21 Jan 23 18:20:12 crc kubenswrapper[4760]: I0123 18:20:12.713559 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" Jan 23 18:20:12 crc kubenswrapper[4760]: I0123 18:20:12.785990 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:20:12 crc kubenswrapper[4760]: I0123 18:20:12.786242 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerName="dnsmasq-dns" containerID="cri-o://f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132" gracePeriod=10 Jan 23 18:20:12 crc kubenswrapper[4760]: E0123 18:20:12.843909 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3d40e30_e607_4d95_99ec_0d97b415eddb.slice/crio-f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:20:12 crc kubenswrapper[4760]: I0123 18:20:12.848756 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhkh" event={"ID":"d24761f8-e610-4947-a62f-83b495cbf71b","Type":"ContainerStarted","Data":"91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21"} Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.283332 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.289625 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.381118 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp5n4\" (UniqueName: \"kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4\") pod \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.381488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vznhq\" (UniqueName: \"kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq\") pod \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.381596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts\") pod \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\" (UID: \"a4e438fd-20d8-4fd5-966b-7166b9dc8fed\") " Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.381664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts\") pod \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\" (UID: \"14f13de4-d7c3-4501-9f08-c98dcaac24c7\") " Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.382470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4e438fd-20d8-4fd5-966b-7166b9dc8fed" (UID: "a4e438fd-20d8-4fd5-966b-7166b9dc8fed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.382546 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14f13de4-d7c3-4501-9f08-c98dcaac24c7" (UID: "14f13de4-d7c3-4501-9f08-c98dcaac24c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.387111 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq" (OuterVolumeSpecName: "kube-api-access-vznhq") pod "a4e438fd-20d8-4fd5-966b-7166b9dc8fed" (UID: "a4e438fd-20d8-4fd5-966b-7166b9dc8fed"). InnerVolumeSpecName "kube-api-access-vznhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.387253 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4" (OuterVolumeSpecName: "kube-api-access-rp5n4") pod "14f13de4-d7c3-4501-9f08-c98dcaac24c7" (UID: "14f13de4-d7c3-4501-9f08-c98dcaac24c7"). InnerVolumeSpecName "kube-api-access-rp5n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.483748 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.483786 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f13de4-d7c3-4501-9f08-c98dcaac24c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.483798 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp5n4\" (UniqueName: \"kubernetes.io/projected/14f13de4-d7c3-4501-9f08-c98dcaac24c7-kube-api-access-rp5n4\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.483814 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vznhq\" (UniqueName: \"kubernetes.io/projected/a4e438fd-20d8-4fd5-966b-7166b9dc8fed-kube-api-access-vznhq\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.858433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lqcnv" event={"ID":"a4e438fd-20d8-4fd5-966b-7166b9dc8fed","Type":"ContainerDied","Data":"4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052"} Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.859243 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4671e51aa4735727aae283d2a37d47b6d380c99e31a63a7b3e05340145a85052" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.858523 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lqcnv" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.860220 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c35-account-create-update-wrntz" event={"ID":"14f13de4-d7c3-4501-9f08-c98dcaac24c7","Type":"ContainerDied","Data":"f23d39a58fab05105d8230b924b596a5c98d9c16ba12638aac6e147d9e84fa07"} Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.860253 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8c35-account-create-update-wrntz" Jan 23 18:20:13 crc kubenswrapper[4760]: I0123 18:20:13.860259 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23d39a58fab05105d8230b924b596a5c98d9c16ba12638aac6e147d9e84fa07" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.020198 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cgcqh"] Jan 23 18:20:14 crc kubenswrapper[4760]: E0123 18:20:14.020535 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e438fd-20d8-4fd5-966b-7166b9dc8fed" containerName="mariadb-database-create" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.020550 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e438fd-20d8-4fd5-966b-7166b9dc8fed" containerName="mariadb-database-create" Jan 23 18:20:14 crc kubenswrapper[4760]: E0123 18:20:14.020576 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f13de4-d7c3-4501-9f08-c98dcaac24c7" containerName="mariadb-account-create-update" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.020584 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f13de4-d7c3-4501-9f08-c98dcaac24c7" containerName="mariadb-account-create-update" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.020746 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f13de4-d7c3-4501-9f08-c98dcaac24c7" 
containerName="mariadb-account-create-update" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.020769 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e438fd-20d8-4fd5-966b-7166b9dc8fed" containerName="mariadb-database-create" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.021252 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.037884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cgcqh"] Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.091055 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp882\" (UniqueName: \"kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.091118 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.193108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp882\" (UniqueName: \"kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.193234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.194893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.219593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp882\" (UniqueName: \"kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882\") pod \"keystone-db-create-cgcqh\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.335679 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.769992 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cgcqh"] Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.871570 4760 generic.go:334] "Generic (PLEG): container finished" podID="d24761f8-e610-4947-a62f-83b495cbf71b" containerID="0dd956d7ca4b5cf9eff761ce1aa02ff09a89c1b3d512ef742967c784ca7b90aa" exitCode=0 Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.871975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhkh" event={"ID":"d24761f8-e610-4947-a62f-83b495cbf71b","Type":"ContainerDied","Data":"0dd956d7ca4b5cf9eff761ce1aa02ff09a89c1b3d512ef742967c784ca7b90aa"} Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.873436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cgcqh" event={"ID":"653cbf70-380b-4f51-9ec9-4338348956ee","Type":"ContainerStarted","Data":"533b46b4264416d53d6f6ec6af30bc9b83bd20fd84cbe174dd9fa8269ebfe32c"} Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.875327 4760 generic.go:334] "Generic (PLEG): container finished" podID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerID="f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132" exitCode=0 Jan 23 18:20:14 crc kubenswrapper[4760]: I0123 18:20:14.875347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" event={"ID":"a3d40e30-e607-4d95-99ec-0d97b415eddb","Type":"ContainerDied","Data":"f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132"} Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.014058 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.111257 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlsl5\" (UniqueName: \"kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5\") pod \"a3d40e30-e607-4d95-99ec-0d97b415eddb\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.111366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc\") pod \"a3d40e30-e607-4d95-99ec-0d97b415eddb\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.111425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config\") pod \"a3d40e30-e607-4d95-99ec-0d97b415eddb\" (UID: \"a3d40e30-e607-4d95-99ec-0d97b415eddb\") " Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.116578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5" (OuterVolumeSpecName: "kube-api-access-zlsl5") pod "a3d40e30-e607-4d95-99ec-0d97b415eddb" (UID: "a3d40e30-e607-4d95-99ec-0d97b415eddb"). InnerVolumeSpecName "kube-api-access-zlsl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.152061 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config" (OuterVolumeSpecName: "config") pod "a3d40e30-e607-4d95-99ec-0d97b415eddb" (UID: "a3d40e30-e607-4d95-99ec-0d97b415eddb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.153247 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3d40e30-e607-4d95-99ec-0d97b415eddb" (UID: "a3d40e30-e607-4d95-99ec-0d97b415eddb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.213546 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.213735 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d40e30-e607-4d95-99ec-0d97b415eddb-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.213794 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlsl5\" (UniqueName: \"kubernetes.io/projected/a3d40e30-e607-4d95-99ec-0d97b415eddb-kube-api-access-zlsl5\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.331533 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-p4m6p"] Jan 23 18:20:15 crc kubenswrapper[4760]: E0123 18:20:15.331862 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerName="dnsmasq-dns" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.331877 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerName="dnsmasq-dns" Jan 23 18:20:15 crc kubenswrapper[4760]: E0123 18:20:15.331896 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" 
containerName="init" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.331902 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerName="init" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.332066 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" containerName="dnsmasq-dns" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.332538 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.336274 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttxth" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.342430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.392334 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p4m6p"] Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.418129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.418173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pknp\" (UniqueName: \"kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.418195 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.418377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.520059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pknp\" (UniqueName: \"kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.520125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.520214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.520381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.525173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.525260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.530027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.539460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pknp\" (UniqueName: \"kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp\") pod \"glance-db-sync-p4m6p\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") " pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.645435 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-p4m6p" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.885157 4760 generic.go:334] "Generic (PLEG): container finished" podID="653cbf70-380b-4f51-9ec9-4338348956ee" containerID="955f544d3528018e131302f1ce66c1de5989e95de1da4c99a22966c1c952957f" exitCode=0 Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.885258 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cgcqh" event={"ID":"653cbf70-380b-4f51-9ec9-4338348956ee","Type":"ContainerDied","Data":"955f544d3528018e131302f1ce66c1de5989e95de1da4c99a22966c1c952957f"} Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.887597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" event={"ID":"a3d40e30-e607-4d95-99ec-0d97b415eddb","Type":"ContainerDied","Data":"cbdeefa1f16ddea6dc90e463acbcf8e619c5526d5faaaab21b90b68dabf6a48b"} Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.887669 4760 scope.go:117] "RemoveContainer" containerID="f6019de0b773b0a73648966d7e011d1a0b86c259c3c50954ad31ce8f9614c132" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.888605 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rtdnw" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.914127 4760 scope.go:117] "RemoveContainer" containerID="671ba0b2068c2743cb4a4eeec0b209079be7117acfeeb719de30e9bd31b8accc" Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.921488 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:20:15 crc kubenswrapper[4760]: I0123 18:20:15.926789 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rtdnw"] Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.177632 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.215255 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p4m6p"] Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.229594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts\") pod \"d24761f8-e610-4947-a62f-83b495cbf71b\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.229765 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f\") pod \"d24761f8-e610-4947-a62f-83b495cbf71b\" (UID: \"d24761f8-e610-4947-a62f-83b495cbf71b\") " Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.231176 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d24761f8-e610-4947-a62f-83b495cbf71b" (UID: "d24761f8-e610-4947-a62f-83b495cbf71b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.237069 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f" (OuterVolumeSpecName: "kube-api-access-n7n4f") pod "d24761f8-e610-4947-a62f-83b495cbf71b" (UID: "d24761f8-e610-4947-a62f-83b495cbf71b"). InnerVolumeSpecName "kube-api-access-n7n4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.332145 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/d24761f8-e610-4947-a62f-83b495cbf71b-kube-api-access-n7n4f\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.332605 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24761f8-e610-4947-a62f-83b495cbf71b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.899139 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhkh" event={"ID":"d24761f8-e610-4947-a62f-83b495cbf71b","Type":"ContainerDied","Data":"91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21"} Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.899190 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91286ba61d31a36df7b77cca9398d63a6b13f45f7164a709aba73c42be712f21" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.899251 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fhhkh" Jan 23 18:20:16 crc kubenswrapper[4760]: I0123 18:20:16.903294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p4m6p" event={"ID":"75c11195-65a7-41d7-857c-15a8962cd2e3","Type":"ContainerStarted","Data":"4362f10c04fd859d741d3869ce893c332ed722fd3f7418f9f0da6bb2e2c0f927"} Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.219786 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.348031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts\") pod \"653cbf70-380b-4f51-9ec9-4338348956ee\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.348094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp882\" (UniqueName: \"kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882\") pod \"653cbf70-380b-4f51-9ec9-4338348956ee\" (UID: \"653cbf70-380b-4f51-9ec9-4338348956ee\") " Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.348641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "653cbf70-380b-4f51-9ec9-4338348956ee" (UID: "653cbf70-380b-4f51-9ec9-4338348956ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.352591 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882" (OuterVolumeSpecName: "kube-api-access-gp882") pod "653cbf70-380b-4f51-9ec9-4338348956ee" (UID: "653cbf70-380b-4f51-9ec9-4338348956ee"). InnerVolumeSpecName "kube-api-access-gp882". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.449805 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653cbf70-380b-4f51-9ec9-4338348956ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:17 crc kubenswrapper[4760]: I0123 18:20:17.449846 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp882\" (UniqueName: \"kubernetes.io/projected/653cbf70-380b-4f51-9ec9-4338348956ee-kube-api-access-gp882\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.387848 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d40e30-e607-4d95-99ec-0d97b415eddb" path="/var/lib/kubelet/pods/a3d40e30-e607-4d95-99ec-0d97b415eddb/volumes" Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.394951 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cgcqh" Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.394837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cgcqh" event={"ID":"653cbf70-380b-4f51-9ec9-4338348956ee","Type":"ContainerDied","Data":"533b46b4264416d53d6f6ec6af30bc9b83bd20fd84cbe174dd9fa8269ebfe32c"} Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.396166 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533b46b4264416d53d6f6ec6af30bc9b83bd20fd84cbe174dd9fa8269ebfe32c" Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.396946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fhhkh"] Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.407144 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fhhkh"] Jan 23 18:20:18 crc kubenswrapper[4760]: I0123 18:20:18.431284 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 18:20:19 crc kubenswrapper[4760]: I0123 18:20:19.607967 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24761f8-e610-4947-a62f-83b495cbf71b" path="/var/lib/kubelet/pods/d24761f8-e610-4947-a62f-83b495cbf71b/volumes" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.431561 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nc2ws"] Jan 23 18:20:21 crc kubenswrapper[4760]: E0123 18:20:21.432007 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24761f8-e610-4947-a62f-83b495cbf71b" containerName="mariadb-account-create-update" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.432024 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24761f8-e610-4947-a62f-83b495cbf71b" containerName="mariadb-account-create-update" Jan 23 18:20:21 crc kubenswrapper[4760]: E0123 18:20:21.432046 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653cbf70-380b-4f51-9ec9-4338348956ee" containerName="mariadb-database-create" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.432055 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="653cbf70-380b-4f51-9ec9-4338348956ee" containerName="mariadb-database-create" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.432293 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="653cbf70-380b-4f51-9ec9-4338348956ee" containerName="mariadb-database-create" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.432329 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24761f8-e610-4947-a62f-83b495cbf71b" containerName="mariadb-account-create-update" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.433190 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.436218 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.439385 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nc2ws"] Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.604789 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sp4j\" (UniqueName: \"kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j\") pod \"root-account-create-update-nc2ws\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.604860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts\") pod \"root-account-create-update-nc2ws\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.707125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sp4j\" (UniqueName: \"kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j\") pod \"root-account-create-update-nc2ws\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.707213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts\") pod \"root-account-create-update-nc2ws\" (UID: 
\"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.708903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts\") pod \"root-account-create-update-nc2ws\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.738836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sp4j\" (UniqueName: \"kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j\") pod \"root-account-create-update-nc2ws\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:21 crc kubenswrapper[4760]: I0123 18:20:21.758047 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:22 crc kubenswrapper[4760]: I0123 18:20:22.240728 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nc2ws"] Jan 23 18:20:22 crc kubenswrapper[4760]: W0123 18:20:22.250967 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf7aac3_424b_4b13_b3f1_1fa7aa4fae83.slice/crio-58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8 WatchSource:0}: Error finding container 58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8: Status 404 returned error can't find the container with id 58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8 Jan 23 18:20:22 crc kubenswrapper[4760]: I0123 18:20:22.432013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nc2ws" event={"ID":"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83","Type":"ContainerStarted","Data":"58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8"} Jan 23 18:20:23 crc kubenswrapper[4760]: E0123 18:20:23.053711 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf7aac3_424b_4b13_b3f1_1fa7aa4fae83.slice/crio-4944bba3629f638b2143930ccfbba37215e4e646b502848974517e8dabb05253.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:20:23 crc kubenswrapper[4760]: I0123 18:20:23.440030 4760 generic.go:334] "Generic (PLEG): container finished" podID="3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" containerID="4944bba3629f638b2143930ccfbba37215e4e646b502848974517e8dabb05253" exitCode=0 Jan 23 18:20:23 crc kubenswrapper[4760]: I0123 18:20:23.440237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nc2ws" 
event={"ID":"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83","Type":"ContainerDied","Data":"4944bba3629f638b2143930ccfbba37215e4e646b502848974517e8dabb05253"} Jan 23 18:20:25 crc kubenswrapper[4760]: I0123 18:20:25.298474 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2wpph" podUID="ea0533e4-88c1-4a03-93a9-f0680acaafc5" containerName="ovn-controller" probeResult="failure" output=< Jan 23 18:20:25 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 18:20:25 crc kubenswrapper[4760]: > Jan 23 18:20:25 crc kubenswrapper[4760]: I0123 18:20:25.343000 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.093995 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.224626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts\") pod \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.224700 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sp4j\" (UniqueName: \"kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j\") pod \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\" (UID: \"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83\") " Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.226434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" (UID: 
"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.231573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j" (OuterVolumeSpecName: "kube-api-access-7sp4j") pod "3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" (UID: "3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83"). InnerVolumeSpecName "kube-api-access-7sp4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.328144 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.328210 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sp4j\" (UniqueName: \"kubernetes.io/projected/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83-kube-api-access-7sp4j\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.490224 4760 generic.go:334] "Generic (PLEG): container finished" podID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerID="f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90" exitCode=0 Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.490290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerDied","Data":"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90"} Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.493680 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nc2ws" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.493720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nc2ws" event={"ID":"3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83","Type":"ContainerDied","Data":"58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8"} Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.493876 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58f7b1eaa7caf0dbdf3acceaad221937387987437d4f7ec65f26fc25f78602a8" Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.495797 4760 generic.go:334] "Generic (PLEG): container finished" podID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerID="a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947" exitCode=0 Jan 23 18:20:29 crc kubenswrapper[4760]: I0123 18:20:29.495877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerDied","Data":"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947"} Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.302045 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2wpph" podUID="ea0533e4-88c1-4a03-93a9-f0680acaafc5" containerName="ovn-controller" probeResult="failure" output=< Jan 23 18:20:30 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 18:20:30 crc kubenswrapper[4760]: > Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.350741 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7lf5j" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.563437 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2wpph-config-qxjtf"] Jan 23 18:20:30 crc kubenswrapper[4760]: 
E0123 18:20:30.563828 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" containerName="mariadb-account-create-update" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.563841 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" containerName="mariadb-account-create-update" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.564007 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" containerName="mariadb-account-create-update" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.564584 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.567534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.594464 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph-config-qxjtf"] Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.648680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.649043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 
18:20:30.649097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnh7m\" (UniqueName: \"kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.649147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.649276 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.649347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " 
pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnh7m\" (UniqueName: \"kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751942 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " 
pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.752020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.752057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.751725 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.752281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.753474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc 
kubenswrapper[4760]: I0123 18:20:30.771872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnh7m\" (UniqueName: \"kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m\") pod \"ovn-controller-2wpph-config-qxjtf\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:30 crc kubenswrapper[4760]: I0123 18:20:30.969263 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:31 crc kubenswrapper[4760]: W0123 18:20:31.449984 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14646ad6_a565_4143_933c_060b08c8100c.slice/crio-87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a WatchSource:0}: Error finding container 87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a: Status 404 returned error can't find the container with id 87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.452252 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph-config-qxjtf"] Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.525615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerStarted","Data":"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982"} Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.526370 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.534030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p4m6p" 
event={"ID":"75c11195-65a7-41d7-857c-15a8962cd2e3","Type":"ContainerStarted","Data":"02bcafddb1fee8e5a9e9710afeec10beef31a206954b5230e59c1ad6a5fc73c1"} Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.540107 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerStarted","Data":"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3"} Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.541491 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.542693 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-qxjtf" event={"ID":"14646ad6-a565-4143-933c-060b08c8100c","Type":"ContainerStarted","Data":"87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a"} Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.554218 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.973471441 podStartE2EDuration="1m1.554199614s" podCreationTimestamp="2026-01-23 18:19:30 +0000 UTC" firstStartedPulling="2026-01-23 18:19:46.225301994 +0000 UTC m=+1129.227759927" lastFinishedPulling="2026-01-23 18:19:52.806030167 +0000 UTC m=+1135.808488100" observedRunningTime="2026-01-23 18:20:31.548005271 +0000 UTC m=+1174.550463214" watchObservedRunningTime="2026-01-23 18:20:31.554199614 +0000 UTC m=+1174.556657547" Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.571960 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-p4m6p" podStartSLOduration=3.588390126 podStartE2EDuration="16.571940115s" podCreationTimestamp="2026-01-23 18:20:15 +0000 UTC" firstStartedPulling="2026-01-23 18:20:16.221008707 +0000 UTC m=+1159.223466640" lastFinishedPulling="2026-01-23 18:20:29.204558686 +0000 
UTC m=+1172.207016629" observedRunningTime="2026-01-23 18:20:31.565804835 +0000 UTC m=+1174.568262768" watchObservedRunningTime="2026-01-23 18:20:31.571940115 +0000 UTC m=+1174.574398048" Jan 23 18:20:31 crc kubenswrapper[4760]: I0123 18:20:31.593755 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.455713198 podStartE2EDuration="1m2.59373599s" podCreationTimestamp="2026-01-23 18:19:29 +0000 UTC" firstStartedPulling="2026-01-23 18:19:45.787681834 +0000 UTC m=+1128.790139767" lastFinishedPulling="2026-01-23 18:19:52.925704626 +0000 UTC m=+1135.928162559" observedRunningTime="2026-01-23 18:20:31.589552774 +0000 UTC m=+1174.592010727" watchObservedRunningTime="2026-01-23 18:20:31.59373599 +0000 UTC m=+1174.596193943" Jan 23 18:20:32 crc kubenswrapper[4760]: I0123 18:20:32.552670 4760 generic.go:334] "Generic (PLEG): container finished" podID="14646ad6-a565-4143-933c-060b08c8100c" containerID="ba9c77afa0c4cb498713a8f54feaaf670cee6749d8b28e318f62738626d23944" exitCode=0 Jan 23 18:20:32 crc kubenswrapper[4760]: I0123 18:20:32.555164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-qxjtf" event={"ID":"14646ad6-a565-4143-933c-060b08c8100c","Type":"ContainerDied","Data":"ba9c77afa0c4cb498713a8f54feaaf670cee6749d8b28e318f62738626d23944"} Jan 23 18:20:33 crc kubenswrapper[4760]: I0123 18:20:33.398666 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nc2ws"] Jan 23 18:20:33 crc kubenswrapper[4760]: I0123 18:20:33.406868 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nc2ws"] Jan 23 18:20:33 crc kubenswrapper[4760]: I0123 18:20:33.605534 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83" path="/var/lib/kubelet/pods/3bf7aac3-424b-4b13-b3f1-1fa7aa4fae83/volumes" Jan 23 18:20:33 crc 
kubenswrapper[4760]: I0123 18:20:33.869767 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.014770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.014873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.014874 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run" (OuterVolumeSpecName: "var-run") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.014989 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnh7m\" (UniqueName: \"kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.015002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). 
InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.015147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.016022 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.016166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.017645 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts" (OuterVolumeSpecName: "scripts") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.017832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn\") pod \"14646ad6-a565-4143-933c-060b08c8100c\" (UID: \"14646ad6-a565-4143-933c-060b08c8100c\") " Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.017904 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.018618 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.018659 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14646ad6-a565-4143-933c-060b08c8100c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.018677 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.018695 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.018712 4760 reconciler_common.go:293] "Volume detached for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14646ad6-a565-4143-933c-060b08c8100c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.029621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m" (OuterVolumeSpecName: "kube-api-access-gnh7m") pod "14646ad6-a565-4143-933c-060b08c8100c" (UID: "14646ad6-a565-4143-933c-060b08c8100c"). InnerVolumeSpecName "kube-api-access-gnh7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.120186 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnh7m\" (UniqueName: \"kubernetes.io/projected/14646ad6-a565-4143-933c-060b08c8100c-kube-api-access-gnh7m\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.573163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-qxjtf" event={"ID":"14646ad6-a565-4143-933c-060b08c8100c","Type":"ContainerDied","Data":"87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a"} Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.573661 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f82ad69a52df594de5f607038fa3c5a47d399a6d2355bc6bebed471226bf9a" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.573269 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2wpph-config-qxjtf" Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.978491 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2wpph-config-qxjtf"] Jan 23 18:20:34 crc kubenswrapper[4760]: I0123 18:20:34.988473 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2wpph-config-qxjtf"] Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.072487 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2wpph-config-9qbb4"] Jan 23 18:20:35 crc kubenswrapper[4760]: E0123 18:20:35.072920 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14646ad6-a565-4143-933c-060b08c8100c" containerName="ovn-config" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.072945 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="14646ad6-a565-4143-933c-060b08c8100c" containerName="ovn-config" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.073373 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="14646ad6-a565-4143-933c-060b08c8100c" containerName="ovn-config" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.074395 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.077076 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.085170 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph-config-9qbb4"] Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.137900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.137976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szz77\" (UniqueName: \"kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.138003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.138074 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: 
\"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.138278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.138384 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn\") pod 
\"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szz77\" (UniqueName: \"kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.239848 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.240139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.240159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: 
\"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.240177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.240843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.242159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.261036 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szz77\" (UniqueName: \"kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77\") pod \"ovn-controller-2wpph-config-9qbb4\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.300086 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2wpph" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.401389 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.606052 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14646ad6-a565-4143-933c-060b08c8100c" path="/var/lib/kubelet/pods/14646ad6-a565-4143-933c-060b08c8100c/volumes" Jan 23 18:20:35 crc kubenswrapper[4760]: I0123 18:20:35.856817 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2wpph-config-9qbb4"] Jan 23 18:20:35 crc kubenswrapper[4760]: W0123 18:20:35.860475 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed845b7_dd0d_4e7e_8e94_eb8033868cef.slice/crio-94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c WatchSource:0}: Error finding container 94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c: Status 404 returned error can't find the container with id 94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c Jan 23 18:20:36 crc kubenswrapper[4760]: I0123 18:20:36.595482 4760 generic.go:334] "Generic (PLEG): container finished" podID="5ed845b7-dd0d-4e7e-8e94-eb8033868cef" containerID="968cde41f75ddc69b1039cf84a52a7ecd54d355cddf514647e95aa801d876f48" exitCode=0 Jan 23 18:20:36 crc kubenswrapper[4760]: I0123 18:20:36.595535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-9qbb4" event={"ID":"5ed845b7-dd0d-4e7e-8e94-eb8033868cef","Type":"ContainerDied","Data":"968cde41f75ddc69b1039cf84a52a7ecd54d355cddf514647e95aa801d876f48"} Jan 23 18:20:36 crc kubenswrapper[4760]: I0123 18:20:36.595562 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-9qbb4" event={"ID":"5ed845b7-dd0d-4e7e-8e94-eb8033868cef","Type":"ContainerStarted","Data":"94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c"} Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.612587 4760 
generic.go:334] "Generic (PLEG): container finished" podID="75c11195-65a7-41d7-857c-15a8962cd2e3" containerID="02bcafddb1fee8e5a9e9710afeec10beef31a206954b5230e59c1ad6a5fc73c1" exitCode=0 Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.615462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p4m6p" event={"ID":"75c11195-65a7-41d7-857c-15a8962cd2e3","Type":"ContainerDied","Data":"02bcafddb1fee8e5a9e9710afeec10beef31a206954b5230e59c1ad6a5fc73c1"} Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.908980 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph-config-9qbb4" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.989434 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szz77\" (UniqueName: \"kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.989795 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.989820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.989939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.989975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990061 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts\") pod \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\" (UID: \"5ed845b7-dd0d-4e7e-8e94-eb8033868cef\") " Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990159 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990207 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run" (OuterVolumeSpecName: "var-run") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990430 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990450 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990460 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.990661 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.991058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts" (OuterVolumeSpecName: "scripts") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:37 crc kubenswrapper[4760]: I0123 18:20:37.995250 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77" (OuterVolumeSpecName: "kube-api-access-szz77") pod "5ed845b7-dd0d-4e7e-8e94-eb8033868cef" (UID: "5ed845b7-dd0d-4e7e-8e94-eb8033868cef"). InnerVolumeSpecName "kube-api-access-szz77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.091652 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.091704 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.091724 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szz77\" (UniqueName: \"kubernetes.io/projected/5ed845b7-dd0d-4e7e-8e94-eb8033868cef-kube-api-access-szz77\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.422848 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d5x4m"] Jan 23 18:20:38 crc kubenswrapper[4760]: E0123 18:20:38.423226 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed845b7-dd0d-4e7e-8e94-eb8033868cef" containerName="ovn-config"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.423251 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed845b7-dd0d-4e7e-8e94-eb8033868cef" containerName="ovn-config"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.423452 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed845b7-dd0d-4e7e-8e94-eb8033868cef" containerName="ovn-config"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.424058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.427955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.439955 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d5x4m"]
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.497469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.497575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskb5\" (UniqueName: \"kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.599210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cskb5\" (UniqueName: \"kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.599352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.600082 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.619072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cskb5\" (UniqueName: \"kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5\") pod \"root-account-create-update-d5x4m\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") " pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.624112 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2wpph-config-9qbb4" event={"ID":"5ed845b7-dd0d-4e7e-8e94-eb8033868cef","Type":"ContainerDied","Data":"94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c"}
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.624158 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2wpph-config-9qbb4"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.624176 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94acacb46e78e2a7dbea87826f16653c2d0f7359b137c00473e7fcb2859a081c"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.744339 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.989909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2wpph-config-9qbb4"]
Jan 23 18:20:38 crc kubenswrapper[4760]: I0123 18:20:38.995768 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2wpph-config-9qbb4"]
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.056131 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p4m6p"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.226217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data\") pod \"75c11195-65a7-41d7-857c-15a8962cd2e3\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") "
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.226323 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle\") pod \"75c11195-65a7-41d7-857c-15a8962cd2e3\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") "
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.226402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data\") pod \"75c11195-65a7-41d7-857c-15a8962cd2e3\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") "
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.226462 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pknp\" (UniqueName: \"kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp\") pod \"75c11195-65a7-41d7-857c-15a8962cd2e3\" (UID: \"75c11195-65a7-41d7-857c-15a8962cd2e3\") "
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.232401 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp" (OuterVolumeSpecName: "kube-api-access-2pknp") pod "75c11195-65a7-41d7-857c-15a8962cd2e3" (UID: "75c11195-65a7-41d7-857c-15a8962cd2e3"). InnerVolumeSpecName "kube-api-access-2pknp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.234083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75c11195-65a7-41d7-857c-15a8962cd2e3" (UID: "75c11195-65a7-41d7-857c-15a8962cd2e3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.234339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d5x4m"]
Jan 23 18:20:39 crc kubenswrapper[4760]: W0123 18:20:39.243699 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881750bc_c90b_4964_ae2a_9325359893cf.slice/crio-01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb WatchSource:0}: Error finding container 01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb: Status 404 returned error can't find the container with id 01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.271022 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data" (OuterVolumeSpecName: "config-data") pod "75c11195-65a7-41d7-857c-15a8962cd2e3" (UID: "75c11195-65a7-41d7-857c-15a8962cd2e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.282045 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75c11195-65a7-41d7-857c-15a8962cd2e3" (UID: "75c11195-65a7-41d7-857c-15a8962cd2e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.328050 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.328342 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pknp\" (UniqueName: \"kubernetes.io/projected/75c11195-65a7-41d7-857c-15a8962cd2e3-kube-api-access-2pknp\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.328353 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.328366 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c11195-65a7-41d7-857c-15a8962cd2e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.606291 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed845b7-dd0d-4e7e-8e94-eb8033868cef" path="/var/lib/kubelet/pods/5ed845b7-dd0d-4e7e-8e94-eb8033868cef/volumes"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.632597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d5x4m" event={"ID":"881750bc-c90b-4964-ae2a-9325359893cf","Type":"ContainerStarted","Data":"22483d07667daf1630b4320b6e6646212bca246cab7402a73085a9c9bfa89458"}
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.632659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d5x4m" event={"ID":"881750bc-c90b-4964-ae2a-9325359893cf","Type":"ContainerStarted","Data":"01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb"}
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.634287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p4m6p" event={"ID":"75c11195-65a7-41d7-857c-15a8962cd2e3","Type":"ContainerDied","Data":"4362f10c04fd859d741d3869ce893c332ed722fd3f7418f9f0da6bb2e2c0f927"}
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.634318 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4362f10c04fd859d741d3869ce893c332ed722fd3f7418f9f0da6bb2e2c0f927"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.634339 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p4m6p"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.649073 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-d5x4m" podStartSLOduration=1.649046667 podStartE2EDuration="1.649046667s" podCreationTimestamp="2026-01-23 18:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:39.646129416 +0000 UTC m=+1182.648587359" watchObservedRunningTime="2026-01-23 18:20:39.649046667 +0000 UTC m=+1182.651504610"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.892805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"]
Jan 23 18:20:39 crc kubenswrapper[4760]: E0123 18:20:39.893978 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c11195-65a7-41d7-857c-15a8962cd2e3" containerName="glance-db-sync"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.894073 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c11195-65a7-41d7-857c-15a8962cd2e3" containerName="glance-db-sync"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.894344 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c11195-65a7-41d7-857c-15a8962cd2e3" containerName="glance-db-sync"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.895273 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:39 crc kubenswrapper[4760]: I0123 18:20:39.907009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"]
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.037768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.038371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.038506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sk5c\" (UniqueName: \"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.038620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.038767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.140780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.140833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.140874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sk5c\" (UniqueName: \"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.140915 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.140981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.142244 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.142269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.142247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.143224 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.168215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sk5c\" (UniqueName: \"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c\") pod \"dnsmasq-dns-54f9b7b8d9-4vklh\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.215072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh"
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.642667 4760 generic.go:334] "Generic (PLEG): container finished" podID="881750bc-c90b-4964-ae2a-9325359893cf" containerID="22483d07667daf1630b4320b6e6646212bca246cab7402a73085a9c9bfa89458" exitCode=0
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.642716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d5x4m" event={"ID":"881750bc-c90b-4964-ae2a-9325359893cf","Type":"ContainerDied","Data":"22483d07667daf1630b4320b6e6646212bca246cab7402a73085a9c9bfa89458"}
Jan 23 18:20:40 crc kubenswrapper[4760]: I0123 18:20:40.676021 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"]
Jan 23 18:20:40 crc kubenswrapper[4760]: W0123 18:20:40.680663 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod904bcdd2_189f_4b64_9953_612341192088.slice/crio-51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9 WatchSource:0}: Error finding container 51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9: Status 404 returned error can't find the container with id 51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.572587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.637561 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.656801 4760 generic.go:334] "Generic (PLEG): container finished" podID="904bcdd2-189f-4b64-9953-612341192088" containerID="49eaff5ec8a7ff18c5db42e2fb32fb6ee2821ce355c78ee2d8d584151cccbb5b" exitCode=0
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.658257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" event={"ID":"904bcdd2-189f-4b64-9953-612341192088","Type":"ContainerDied","Data":"49eaff5ec8a7ff18c5db42e2fb32fb6ee2821ce355c78ee2d8d584151cccbb5b"}
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.658299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" event={"ID":"904bcdd2-189f-4b64-9953-612341192088","Type":"ContainerStarted","Data":"51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9"}
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.850112 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-444kz"]
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.851690 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.870140 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-444kz"]
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.984747 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f7e0-account-create-update-9wvbl"]
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.986378 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.989656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.990317 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f7e0-account-create-update-9wvbl"]
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.990976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:41 crc kubenswrapper[4760]: I0123 18:20:41.991060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmrj\" (UniqueName: \"kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.059655 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rdpkb"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.060898 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.070006 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rdpkb"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.092199 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.092234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.092273 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7qn\" (UniqueName: \"kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.092476 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcmrj\" (UniqueName: \"kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.094079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.133565 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcmrj\" (UniqueName: \"kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj\") pod \"barbican-db-create-444kz\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.193983 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.194058 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8rt\" (UniqueName: \"kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.194458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.194562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7qn\" (UniqueName: \"kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.195301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.198125 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d5x4m"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.236807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7qn\" (UniqueName: \"kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn\") pod \"barbican-f7e0-account-create-update-9wvbl\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.249095 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-444kz"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.261480 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0a4f-account-create-update-5htpx"]
Jan 23 18:20:42 crc kubenswrapper[4760]: E0123 18:20:42.261945 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881750bc-c90b-4964-ae2a-9325359893cf" containerName="mariadb-account-create-update"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.261967 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="881750bc-c90b-4964-ae2a-9325359893cf" containerName="mariadb-account-create-update"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.262170 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="881750bc-c90b-4964-ae2a-9325359893cf" containerName="mariadb-account-create-update"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.262891 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a4f-account-create-update-5htpx"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.266233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.281398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0a4f-account-create-update-5htpx"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.296129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts\") pod \"881750bc-c90b-4964-ae2a-9325359893cf\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") "
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.296244 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cskb5\" (UniqueName: \"kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5\") pod \"881750bc-c90b-4964-ae2a-9325359893cf\" (UID: \"881750bc-c90b-4964-ae2a-9325359893cf\") "
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.296590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8rt\" (UniqueName: \"kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.296702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.297416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.298052 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "881750bc-c90b-4964-ae2a-9325359893cf" (UID: "881750bc-c90b-4964-ae2a-9325359893cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.306330 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5" (OuterVolumeSpecName: "kube-api-access-cskb5") pod "881750bc-c90b-4964-ae2a-9325359893cf" (UID: "881750bc-c90b-4964-ae2a-9325359893cf"). InnerVolumeSpecName "kube-api-access-cskb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.322658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f7e0-account-create-update-9wvbl"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.330600 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-htdjc"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.331540 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-htdjc"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.335577 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8rt\" (UniqueName: \"kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt\") pod \"cinder-db-create-rdpkb\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " pod="openstack/cinder-db-create-rdpkb"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.337063 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.337276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.343193 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.343957 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9hcxg"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.354502 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-htdjc"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9hm\" (UniqueName: \"kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9qf\" (UniqueName: \"kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398573 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398663 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881750bc-c90b-4964-ae2a-9325359893cf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.398673 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cskb5\" (UniqueName: \"kubernetes.io/projected/881750bc-c90b-4964-ae2a-9325359893cf-kube-api-access-cskb5\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.432704 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5pl85"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.433827 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5pl85"
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.450028 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5pl85"]
Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.494665 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rdpkb" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.499984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9hm\" (UniqueName: \"kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " 
pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqqb\" (UniqueName: \"kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.500289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9qf\" (UniqueName: \"kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.501388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.507945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.508487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " 
pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.519164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9qf\" (UniqueName: \"kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf\") pod \"cinder-0a4f-account-create-update-5htpx\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.519727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9hm\" (UniqueName: \"kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm\") pod \"keystone-db-sync-htdjc\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.548579 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-301c-account-create-update-hfbkp"] Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.550951 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.556802 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.563796 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-301c-account-create-update-hfbkp"] Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.602551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthh2\" (UniqueName: \"kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.602627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.602933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxqqb\" (UniqueName: \"kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.603049 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: 
\"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.603539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.622600 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.624146 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxqqb\" (UniqueName: \"kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb\") pod \"neutron-db-create-5pl85\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.655656 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.672114 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d5x4m" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.672632 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d5x4m" event={"ID":"881750bc-c90b-4964-ae2a-9325359893cf","Type":"ContainerDied","Data":"01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb"} Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.672707 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01eba2d58df25c146a368fb9e1242673a871a3615e1fd8f10aa8a750e8cd37cb" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.691131 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" event={"ID":"904bcdd2-189f-4b64-9953-612341192088","Type":"ContainerStarted","Data":"01e490445413872c436809c30f5371018345c884c451493a52ae537253d573df"} Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.692637 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.710372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthh2\" (UniqueName: \"kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.710643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 
18:20:42.711722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.731183 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podStartSLOduration=3.731164792 podStartE2EDuration="3.731164792s" podCreationTimestamp="2026-01-23 18:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:42.72927888 +0000 UTC m=+1185.731736813" watchObservedRunningTime="2026-01-23 18:20:42.731164792 +0000 UTC m=+1185.733622715" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.736568 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthh2\" (UniqueName: \"kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2\") pod \"neutron-301c-account-create-update-hfbkp\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.766035 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.812200 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-444kz"] Jan 23 18:20:42 crc kubenswrapper[4760]: I0123 18:20:42.877591 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.003574 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f7e0-account-create-update-9wvbl"] Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.140362 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rdpkb"] Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.201145 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-htdjc"] Jan 23 18:20:43 crc kubenswrapper[4760]: W0123 18:20:43.205662 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668368d8_8de8_44fa_bf3b_79308dd8e44b.slice/crio-a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270 WatchSource:0}: Error finding container a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270: Status 404 returned error can't find the container with id a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270 Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.315236 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0a4f-account-create-update-5htpx"] Jan 23 18:20:43 crc kubenswrapper[4760]: W0123 18:20:43.582865 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf74cdf76_d5cc_404a_a5d8_e6e3a3add887.slice/crio-36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753 WatchSource:0}: Error finding container 36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753: Status 404 returned error can't find the container with id 36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753 Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.587087 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5pl85"] Jan 23 
18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.726044 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f7e0-account-create-update-9wvbl" event={"ID":"41c89e48-2282-4867-ac66-6eff2f352646","Type":"ContainerStarted","Data":"c82ffdeae8900d58bab4386f275b80a91e2ba500f436796987f510d6c00c82ac"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.726091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f7e0-account-create-update-9wvbl" event={"ID":"41c89e48-2282-4867-ac66-6eff2f352646","Type":"ContainerStarted","Data":"8d6b6ada179d01a3c2f50d137ba87210fc2bbd6605d0b9b83474054159977767"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.739054 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-444kz" event={"ID":"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de","Type":"ContainerStarted","Data":"06d9ab88b32495d2d1c9d836e86f5061754d422665555c45f5cb023fecd4b53e"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.739086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-444kz" event={"ID":"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de","Type":"ContainerStarted","Data":"75a1f8564450c08a54fdd7f206644316de84cb3cbb9fd08d2bec90c63f1c0fcc"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.750277 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-f7e0-account-create-update-9wvbl" podStartSLOduration=2.7502593810000002 podStartE2EDuration="2.750259381s" podCreationTimestamp="2026-01-23 18:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:43.74806926 +0000 UTC m=+1186.750527193" watchObservedRunningTime="2026-01-23 18:20:43.750259381 +0000 UTC m=+1186.752717324" Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.750943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-0a4f-account-create-update-5htpx" event={"ID":"ddae024a-2888-40f3-954d-f0da9731f77d","Type":"ContainerStarted","Data":"36b9b90391a1a240524536e688e8d6e9195005bfd866fbc10a6d1b577f4558e1"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.751042 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a4f-account-create-update-5htpx" event={"ID":"ddae024a-2888-40f3-954d-f0da9731f77d","Type":"ContainerStarted","Data":"da08ec3b5c5acf11ad56699f58118a6b3540839a1e91b4cebb2efc2985dc2136"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.760508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdpkb" event={"ID":"771accfa-3cd3-46ff-8b06-4ffa90c42a6b","Type":"ContainerStarted","Data":"1e44164f06c433d58d0fea0aec722b5d3b0a04d9468d2f2ef8703fa7f363c462"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.760753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdpkb" event={"ID":"771accfa-3cd3-46ff-8b06-4ffa90c42a6b","Type":"ContainerStarted","Data":"399a382ab100f86b5a2af8ec86bbbdceb89feec4848d1ca7da04ebc7f84bbd64"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.762901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-htdjc" event={"ID":"668368d8-8de8-44fa-bf3b-79308dd8e44b","Type":"ContainerStarted","Data":"a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.771885 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-444kz" podStartSLOduration=2.77187137 podStartE2EDuration="2.77187137s" podCreationTimestamp="2026-01-23 18:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:43.763441387 +0000 UTC m=+1186.765899320" watchObservedRunningTime="2026-01-23 18:20:43.77187137 
+0000 UTC m=+1186.774329303" Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.775901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5pl85" event={"ID":"f74cdf76-d5cc-404a-a5d8-e6e3a3add887","Type":"ContainerStarted","Data":"36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753"} Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.786917 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-rdpkb" podStartSLOduration=1.786901677 podStartE2EDuration="1.786901677s" podCreationTimestamp="2026-01-23 18:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:43.78376175 +0000 UTC m=+1186.786219683" watchObservedRunningTime="2026-01-23 18:20:43.786901677 +0000 UTC m=+1186.789359610" Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.805107 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-301c-account-create-update-hfbkp"] Jan 23 18:20:43 crc kubenswrapper[4760]: I0123 18:20:43.818921 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0a4f-account-create-update-5htpx" podStartSLOduration=1.818901095 podStartE2EDuration="1.818901095s" podCreationTimestamp="2026-01-23 18:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:43.813877175 +0000 UTC m=+1186.816335098" watchObservedRunningTime="2026-01-23 18:20:43.818901095 +0000 UTC m=+1186.821359028" Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.783003 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddae024a-2888-40f3-954d-f0da9731f77d" containerID="36b9b90391a1a240524536e688e8d6e9195005bfd866fbc10a6d1b577f4558e1" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.783094 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a4f-account-create-update-5htpx" event={"ID":"ddae024a-2888-40f3-954d-f0da9731f77d","Type":"ContainerDied","Data":"36b9b90391a1a240524536e688e8d6e9195005bfd866fbc10a6d1b577f4558e1"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.786113 4760 generic.go:334] "Generic (PLEG): container finished" podID="771accfa-3cd3-46ff-8b06-4ffa90c42a6b" containerID="1e44164f06c433d58d0fea0aec722b5d3b0a04d9468d2f2ef8703fa7f363c462" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.786165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdpkb" event={"ID":"771accfa-3cd3-46ff-8b06-4ffa90c42a6b","Type":"ContainerDied","Data":"1e44164f06c433d58d0fea0aec722b5d3b0a04d9468d2f2ef8703fa7f363c462"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.787606 4760 generic.go:334] "Generic (PLEG): container finished" podID="f74cdf76-d5cc-404a-a5d8-e6e3a3add887" containerID="628b4fb4bed4e0e78ea053b47269a57566b2dd5519908e1601961a86c539f4ed" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.787671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5pl85" event={"ID":"f74cdf76-d5cc-404a-a5d8-e6e3a3add887","Type":"ContainerDied","Data":"628b4fb4bed4e0e78ea053b47269a57566b2dd5519908e1601961a86c539f4ed"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.789149 4760 generic.go:334] "Generic (PLEG): container finished" podID="41c89e48-2282-4867-ac66-6eff2f352646" containerID="c82ffdeae8900d58bab4386f275b80a91e2ba500f436796987f510d6c00c82ac" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.789202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f7e0-account-create-update-9wvbl" event={"ID":"41c89e48-2282-4867-ac66-6eff2f352646","Type":"ContainerDied","Data":"c82ffdeae8900d58bab4386f275b80a91e2ba500f436796987f510d6c00c82ac"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 
18:20:44.790700 4760 generic.go:334] "Generic (PLEG): container finished" podID="cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" containerID="06d9ab88b32495d2d1c9d836e86f5061754d422665555c45f5cb023fecd4b53e" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.790742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-444kz" event={"ID":"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de","Type":"ContainerDied","Data":"06d9ab88b32495d2d1c9d836e86f5061754d422665555c45f5cb023fecd4b53e"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.792217 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d70bae9-29c6-4259-98cc-6398f6b472a9" containerID="f3925a5c1564d0af1dbfa21623a61d64a939f8412ed63961e42e98ac58d6c863" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.792245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-301c-account-create-update-hfbkp" event={"ID":"3d70bae9-29c6-4259-98cc-6398f6b472a9","Type":"ContainerDied","Data":"f3925a5c1564d0af1dbfa21623a61d64a939f8412ed63961e42e98ac58d6c863"} Jan 23 18:20:44 crc kubenswrapper[4760]: I0123 18:20:44.792262 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-301c-account-create-update-hfbkp" event={"ID":"3d70bae9-29c6-4259-98cc-6398f6b472a9","Type":"ContainerStarted","Data":"bca207fd763dc66af035dd9d7de07df721b84f90d92a562b51d0786d23669b09"} Jan 23 18:20:46 crc kubenswrapper[4760]: I0123 18:20:46.075753 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:46 crc kubenswrapper[4760]: I0123 18:20:46.076061 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.822781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdpkb" event={"ID":"771accfa-3cd3-46ff-8b06-4ffa90c42a6b","Type":"ContainerDied","Data":"399a382ab100f86b5a2af8ec86bbbdceb89feec4848d1ca7da04ebc7f84bbd64"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.822844 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399a382ab100f86b5a2af8ec86bbbdceb89feec4848d1ca7da04ebc7f84bbd64" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.824979 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a4f-account-create-update-5htpx" event={"ID":"ddae024a-2888-40f3-954d-f0da9731f77d","Type":"ContainerDied","Data":"da08ec3b5c5acf11ad56699f58118a6b3540839a1e91b4cebb2efc2985dc2136"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.825006 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da08ec3b5c5acf11ad56699f58118a6b3540839a1e91b4cebb2efc2985dc2136" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.829805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5pl85" event={"ID":"f74cdf76-d5cc-404a-a5d8-e6e3a3add887","Type":"ContainerDied","Data":"36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.829852 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36dd5bee27a789fda8e976a13df28734af851cbda5f7a26e043f90fdd56e7753" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.831866 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f7e0-account-create-update-9wvbl" 
event={"ID":"41c89e48-2282-4867-ac66-6eff2f352646","Type":"ContainerDied","Data":"8d6b6ada179d01a3c2f50d137ba87210fc2bbd6605d0b9b83474054159977767"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.831902 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6b6ada179d01a3c2f50d137ba87210fc2bbd6605d0b9b83474054159977767" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.835876 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-444kz" event={"ID":"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de","Type":"ContainerDied","Data":"75a1f8564450c08a54fdd7f206644316de84cb3cbb9fd08d2bec90c63f1c0fcc"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.835952 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a1f8564450c08a54fdd7f206644316de84cb3cbb9fd08d2bec90c63f1c0fcc" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.840755 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-301c-account-create-update-hfbkp" event={"ID":"3d70bae9-29c6-4259-98cc-6398f6b472a9","Type":"ContainerDied","Data":"bca207fd763dc66af035dd9d7de07df721b84f90d92a562b51d0786d23669b09"} Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.840784 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bca207fd763dc66af035dd9d7de07df721b84f90d92a562b51d0786d23669b09" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.840920 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.888330 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rdpkb" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.914448 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthh2\" (UniqueName: \"kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2\") pod \"3d70bae9-29c6-4259-98cc-6398f6b472a9\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.914627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts\") pod \"3d70bae9-29c6-4259-98cc-6398f6b472a9\" (UID: \"3d70bae9-29c6-4259-98cc-6398f6b472a9\") " Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.915237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d70bae9-29c6-4259-98cc-6398f6b472a9" (UID: "3d70bae9-29c6-4259-98cc-6398f6b472a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.917025 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f7e0-account-create-update-9wvbl" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.921127 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2" (OuterVolumeSpecName: "kube-api-access-qthh2") pod "3d70bae9-29c6-4259-98cc-6398f6b472a9" (UID: "3d70bae9-29c6-4259-98cc-6398f6b472a9"). InnerVolumeSpecName "kube-api-access-qthh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.978362 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.984234 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:47 crc kubenswrapper[4760]: I0123 18:20:47.989538 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-444kz" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.015771 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts\") pod \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.015856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj8rt\" (UniqueName: \"kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt\") pod \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\" (UID: \"771accfa-3cd3-46ff-8b06-4ffa90c42a6b\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.015909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts\") pod \"41c89e48-2282-4867-ac66-6eff2f352646\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.015951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7qn\" (UniqueName: \"kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn\") pod 
\"41c89e48-2282-4867-ac66-6eff2f352646\" (UID: \"41c89e48-2282-4867-ac66-6eff2f352646\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.016385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "771accfa-3cd3-46ff-8b06-4ffa90c42a6b" (UID: "771accfa-3cd3-46ff-8b06-4ffa90c42a6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.016667 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthh2\" (UniqueName: \"kubernetes.io/projected/3d70bae9-29c6-4259-98cc-6398f6b472a9-kube-api-access-qthh2\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.016694 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.016708 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d70bae9-29c6-4259-98cc-6398f6b472a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.016884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41c89e48-2282-4867-ac66-6eff2f352646" (UID: "41c89e48-2282-4867-ac66-6eff2f352646"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.019994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt" (OuterVolumeSpecName: "kube-api-access-vj8rt") pod "771accfa-3cd3-46ff-8b06-4ffa90c42a6b" (UID: "771accfa-3cd3-46ff-8b06-4ffa90c42a6b"). InnerVolumeSpecName "kube-api-access-vj8rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.020783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn" (OuterVolumeSpecName: "kube-api-access-5t7qn") pod "41c89e48-2282-4867-ac66-6eff2f352646" (UID: "41c89e48-2282-4867-ac66-6eff2f352646"). InnerVolumeSpecName "kube-api-access-5t7qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117257 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts\") pod \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9qf\" (UniqueName: \"kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf\") pod \"ddae024a-2888-40f3-954d-f0da9731f77d\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcmrj\" (UniqueName: \"kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj\") pod 
\"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117477 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxqqb\" (UniqueName: \"kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb\") pod \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\" (UID: \"f74cdf76-d5cc-404a-a5d8-e6e3a3add887\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts\") pod \"ddae024a-2888-40f3-954d-f0da9731f77d\" (UID: \"ddae024a-2888-40f3-954d-f0da9731f77d\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.117634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts\") pod \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\" (UID: \"cbff3f04-bb64-4735-bbaa-ea70fcb6f4de\") " Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.118031 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj8rt\" (UniqueName: \"kubernetes.io/projected/771accfa-3cd3-46ff-8b06-4ffa90c42a6b-kube-api-access-vj8rt\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.118057 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c89e48-2282-4867-ac66-6eff2f352646-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.118071 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7qn\" (UniqueName: \"kubernetes.io/projected/41c89e48-2282-4867-ac66-6eff2f352646-kube-api-access-5t7qn\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.118866 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddae024a-2888-40f3-954d-f0da9731f77d" (UID: "ddae024a-2888-40f3-954d-f0da9731f77d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.118951 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" (UID: "cbff3f04-bb64-4735-bbaa-ea70fcb6f4de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.119129 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f74cdf76-d5cc-404a-a5d8-e6e3a3add887" (UID: "f74cdf76-d5cc-404a-a5d8-e6e3a3add887"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.121996 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb" (OuterVolumeSpecName: "kube-api-access-pxqqb") pod "f74cdf76-d5cc-404a-a5d8-e6e3a3add887" (UID: "f74cdf76-d5cc-404a-a5d8-e6e3a3add887"). InnerVolumeSpecName "kube-api-access-pxqqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.122054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf" (OuterVolumeSpecName: "kube-api-access-4k9qf") pod "ddae024a-2888-40f3-954d-f0da9731f77d" (UID: "ddae024a-2888-40f3-954d-f0da9731f77d"). InnerVolumeSpecName "kube-api-access-4k9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.122130 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj" (OuterVolumeSpecName: "kube-api-access-wcmrj") pod "cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" (UID: "cbff3f04-bb64-4735-bbaa-ea70fcb6f4de"). InnerVolumeSpecName "kube-api-access-wcmrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220130 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220189 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k9qf\" (UniqueName: \"kubernetes.io/projected/ddae024a-2888-40f3-954d-f0da9731f77d-kube-api-access-4k9qf\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220210 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcmrj\" (UniqueName: \"kubernetes.io/projected/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-kube-api-access-wcmrj\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220226 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxqqb\" (UniqueName: 
\"kubernetes.io/projected/f74cdf76-d5cc-404a-a5d8-e6e3a3add887-kube-api-access-pxqqb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220240 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddae024a-2888-40f3-954d-f0da9731f77d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.220260 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.848915 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a4f-account-create-update-5htpx" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.849006 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rdpkb" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.856303 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-444kz" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.856505 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-301c-account-create-update-hfbkp" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.856538 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f7e0-account-create-update-9wvbl" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.856543 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5pl85" Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.856482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-htdjc" event={"ID":"668368d8-8de8-44fa-bf3b-79308dd8e44b","Type":"ContainerStarted","Data":"27e365005a6f687d7a9afdc36c56eadcd863879e46fbe8b491feee933409f858"} Jan 23 18:20:48 crc kubenswrapper[4760]: I0123 18:20:48.890128 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-htdjc" podStartSLOduration=2.339917238 podStartE2EDuration="6.890108215s" podCreationTimestamp="2026-01-23 18:20:42 +0000 UTC" firstStartedPulling="2026-01-23 18:20:43.216701351 +0000 UTC m=+1186.219159284" lastFinishedPulling="2026-01-23 18:20:47.766892318 +0000 UTC m=+1190.769350261" observedRunningTime="2026-01-23 18:20:48.881364953 +0000 UTC m=+1191.883822896" watchObservedRunningTime="2026-01-23 18:20:48.890108215 +0000 UTC m=+1191.892566148" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.216710 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.268634 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"] Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.268868 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="dnsmasq-dns" containerID="cri-o://57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57" gracePeriod=10 Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.718113 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.867167 4760 generic.go:334] "Generic (PLEG): container finished" podID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerID="57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57" exitCode=0 Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.867213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" event={"ID":"2dd44778-15c8-48cd-86ee-29cf85d7fc7a","Type":"ContainerDied","Data":"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57"} Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.867250 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" event={"ID":"2dd44778-15c8-48cd-86ee-29cf85d7fc7a","Type":"ContainerDied","Data":"cd7fb0c95b95ec866b943119f6972456fd03cc3cceb59db759ac43a39a93330c"} Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.867251 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dt5ls" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.867272 4760 scope.go:117] "RemoveContainer" containerID="57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.891803 4760 scope.go:117] "RemoveContainer" containerID="16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.901860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb\") pod \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.901956 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb\") pod \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.902002 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config\") pod \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.902037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-629jw\" (UniqueName: \"kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw\") pod \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.902076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc\") pod \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\" (UID: \"2dd44778-15c8-48cd-86ee-29cf85d7fc7a\") " Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.910770 4760 scope.go:117] "RemoveContainer" containerID="57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.910782 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw" (OuterVolumeSpecName: "kube-api-access-629jw") pod "2dd44778-15c8-48cd-86ee-29cf85d7fc7a" (UID: "2dd44778-15c8-48cd-86ee-29cf85d7fc7a"). InnerVolumeSpecName "kube-api-access-629jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:50 crc kubenswrapper[4760]: E0123 18:20:50.911307 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57\": container with ID starting with 57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57 not found: ID does not exist" containerID="57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.911368 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57"} err="failed to get container status \"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57\": rpc error: code = NotFound desc = could not find container \"57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57\": container with ID starting with 57225de1439091620d4767d9778dc284e1d997525fe6bc42907d4f5a5740ed57 not found: ID does not exist" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.911430 4760 scope.go:117] "RemoveContainer" 
containerID="16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32" Jan 23 18:20:50 crc kubenswrapper[4760]: E0123 18:20:50.912341 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32\": container with ID starting with 16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32 not found: ID does not exist" containerID="16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.912394 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32"} err="failed to get container status \"16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32\": rpc error: code = NotFound desc = could not find container \"16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32\": container with ID starting with 16278622977cf18e8a42de00819d11abf1980175b4f08e02e4e94d4d9b781f32 not found: ID does not exist" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.937754 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2dd44778-15c8-48cd-86ee-29cf85d7fc7a" (UID: "2dd44778-15c8-48cd-86ee-29cf85d7fc7a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.943939 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2dd44778-15c8-48cd-86ee-29cf85d7fc7a" (UID: "2dd44778-15c8-48cd-86ee-29cf85d7fc7a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.954261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config" (OuterVolumeSpecName: "config") pod "2dd44778-15c8-48cd-86ee-29cf85d7fc7a" (UID: "2dd44778-15c8-48cd-86ee-29cf85d7fc7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:50 crc kubenswrapper[4760]: I0123 18:20:50.982645 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dd44778-15c8-48cd-86ee-29cf85d7fc7a" (UID: "2dd44778-15c8-48cd-86ee-29cf85d7fc7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.003513 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.003547 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.003558 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.003568 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-629jw\" (UniqueName: \"kubernetes.io/projected/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-kube-api-access-629jw\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:51 crc 
kubenswrapper[4760]: I0123 18:20:51.003579 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd44778-15c8-48cd-86ee-29cf85d7fc7a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.204122 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"] Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.218696 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dt5ls"] Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.605256 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" path="/var/lib/kubelet/pods/2dd44778-15c8-48cd-86ee-29cf85d7fc7a/volumes" Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.875681 4760 generic.go:334] "Generic (PLEG): container finished" podID="668368d8-8de8-44fa-bf3b-79308dd8e44b" containerID="27e365005a6f687d7a9afdc36c56eadcd863879e46fbe8b491feee933409f858" exitCode=0 Jan 23 18:20:51 crc kubenswrapper[4760]: I0123 18:20:51.875759 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-htdjc" event={"ID":"668368d8-8de8-44fa-bf3b-79308dd8e44b","Type":"ContainerDied","Data":"27e365005a6f687d7a9afdc36c56eadcd863879e46fbe8b491feee933409f858"} Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.214125 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.349498 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data\") pod \"668368d8-8de8-44fa-bf3b-79308dd8e44b\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.349577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll9hm\" (UniqueName: \"kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm\") pod \"668368d8-8de8-44fa-bf3b-79308dd8e44b\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.349661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle\") pod \"668368d8-8de8-44fa-bf3b-79308dd8e44b\" (UID: \"668368d8-8de8-44fa-bf3b-79308dd8e44b\") " Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.364103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm" (OuterVolumeSpecName: "kube-api-access-ll9hm") pod "668368d8-8de8-44fa-bf3b-79308dd8e44b" (UID: "668368d8-8de8-44fa-bf3b-79308dd8e44b"). InnerVolumeSpecName "kube-api-access-ll9hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.384736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "668368d8-8de8-44fa-bf3b-79308dd8e44b" (UID: "668368d8-8de8-44fa-bf3b-79308dd8e44b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.403368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data" (OuterVolumeSpecName: "config-data") pod "668368d8-8de8-44fa-bf3b-79308dd8e44b" (UID: "668368d8-8de8-44fa-bf3b-79308dd8e44b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.451804 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll9hm\" (UniqueName: \"kubernetes.io/projected/668368d8-8de8-44fa-bf3b-79308dd8e44b-kube-api-access-ll9hm\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.451849 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.451861 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668368d8-8de8-44fa-bf3b-79308dd8e44b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.898236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-htdjc" event={"ID":"668368d8-8de8-44fa-bf3b-79308dd8e44b","Type":"ContainerDied","Data":"a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270"} Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.898564 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6fb60b86abd9bc9b12d2dfdfdbf5bf737cdf6aa47fa4d9e4ef6be3c0bd46270" Jan 23 18:20:53 crc kubenswrapper[4760]: I0123 18:20:53.898317 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-htdjc" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.099735 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100186 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668368d8-8de8-44fa-bf3b-79308dd8e44b" containerName="keystone-db-sync" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="668368d8-8de8-44fa-bf3b-79308dd8e44b" containerName="keystone-db-sync" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100253 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771accfa-3cd3-46ff-8b06-4ffa90c42a6b" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="771accfa-3cd3-46ff-8b06-4ffa90c42a6b" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100289 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="dnsmasq-dns" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100299 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="dnsmasq-dns" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100315 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddae024a-2888-40f3-954d-f0da9731f77d" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100325 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddae024a-2888-40f3-954d-f0da9731f77d" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100340 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41c89e48-2282-4867-ac66-6eff2f352646" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100347 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c89e48-2282-4867-ac66-6eff2f352646" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100357 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100365 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100375 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d70bae9-29c6-4259-98cc-6398f6b472a9" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100383 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d70bae9-29c6-4259-98cc-6398f6b472a9" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100402 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="init" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100563 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="init" Jan 23 18:20:54 crc kubenswrapper[4760]: E0123 18:20:54.100584 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74cdf76-d5cc-404a-a5d8-e6e3a3add887" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100592 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74cdf76-d5cc-404a-a5d8-e6e3a3add887" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100796 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd44778-15c8-48cd-86ee-29cf85d7fc7a" containerName="dnsmasq-dns" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100813 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="668368d8-8de8-44fa-bf3b-79308dd8e44b" containerName="keystone-db-sync" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100825 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddae024a-2888-40f3-954d-f0da9731f77d" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100842 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74cdf76-d5cc-404a-a5d8-e6e3a3add887" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100855 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d70bae9-29c6-4259-98cc-6398f6b472a9" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100867 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c89e48-2282-4867-ac66-6eff2f352646" containerName="mariadb-account-create-update" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100877 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="771accfa-3cd3-46ff-8b06-4ffa90c42a6b" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.100887 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" containerName="mariadb-database-create" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.101872 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.119920 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.138886 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2kw68"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.143457 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.145439 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.145645 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.145805 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9hcxg" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.146133 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.152393 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.156685 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2kw68"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7999c\" (UniqueName: \"kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: 
I0123 18:20:54.162504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162533 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlztm\" (UniqueName: \"kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162651 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc 
kubenswrapper[4760]: I0123 18:20:54.162690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162723 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162751 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.162817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 
18:20:54.265348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7999c\" (UniqueName: \"kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265585 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlztm\" (UniqueName: \"kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265648 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265670 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.265733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.266596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.267064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.267255 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.269125 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.274630 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys\") pod \"keystone-bootstrap-2kw68\" (UID: 
\"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.278312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.289926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.292032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.322703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.323075 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlztm\" (UniqueName: \"kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm\") pod \"dnsmasq-dns-6546db6db7-9c28p\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 
18:20:54.326529 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.326728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7999c\" (UniqueName: \"kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c\") pod \"keystone-bootstrap-2kw68\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.330242 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.333209 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.333583 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.363717 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-w8mlk"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.365372 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.426354 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.426397 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x5b9j" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.426577 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.435296 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w8mlk"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.444498 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.451492 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.481558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.490466 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8bqhf"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.491518 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.494450 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9dcp9" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.494703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495568 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 
18:20:54.495609 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghcc\" (UniqueName: \"kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495724 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnn7t\" (UniqueName: \"kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.495798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.539446 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8bqhf"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.553199 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qmvlz"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.554274 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.564779 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-88m77" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.565021 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.565127 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.572617 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qmvlz"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnn7t\" (UniqueName: \"kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598233 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598276 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9m2\" (UniqueName: \"kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598329 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data\") pod 
\"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598394 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghcc\" (UniqueName: \"kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc 
kubenswrapper[4760]: I0123 18:20:54.598488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.598520 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.601906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.602544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.609879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.615071 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.629550 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qghcc\" (UniqueName: \"kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.629775 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.630616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.630895 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.631111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.633382 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts\") pod \"ceilometer-0\" (UID: 
\"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.635322 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnn7t\" (UniqueName: \"kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.655802 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wmcdm"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.656436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data\") pod \"cinder-db-sync-w8mlk\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") " pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.656864 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.660421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.661570 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mj5jx" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.661734 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.661863 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.680453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data\") pod \"ceilometer-0\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.682948 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.684364 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.697458 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wmcdm"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702442 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702488 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltrx\" (UniqueName: \"kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702537 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9m2\" (UniqueName: \"kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.702590 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.721251 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.723202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.738374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9m2\" (UniqueName: \"kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.743672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data\") pod \"barbican-db-sync-8bqhf\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") " pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.754893 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.755764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w8mlk" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltrx\" (UniqueName: 
\"kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.809611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24q88\" (UniqueName: \"kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810036 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810068 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58l9g\" (UniqueName: \"kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810142 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.810217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.813453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.815468 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config\") pod 
\"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.831614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltrx\" (UniqueName: \"kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx\") pod \"neutron-db-sync-qmvlz\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.904322 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8bqhf" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912170 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24q88\" (UniqueName: \"kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58l9g\" 
(UniqueName: \"kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912331 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912363 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.912386 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.914364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.915031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.915054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.916342 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc 
kubenswrapper[4760]: I0123 18:20:54.919022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.920087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.920117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.926843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.937531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24q88\" (UniqueName: \"kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88\") pod \"dnsmasq-dns-7987f74bbc-ccbhl\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.940431 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-58l9g\" (UniqueName: \"kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g\") pod \"placement-db-sync-wmcdm\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.944565 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:20:54 crc kubenswrapper[4760]: I0123 18:20:54.988141 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wmcdm" Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.046056 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.125490 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.246033 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2kw68"] Jan 23 18:20:55 crc kubenswrapper[4760]: W0123 18:20:55.266495 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91790f2e_b7a0_4d5e_ac2b_ea160b59352a.slice/crio-920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0 WatchSource:0}: Error finding container 920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0: Status 404 returned error can't find the container with id 920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0 Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.329515 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w8mlk"] Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.352687 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:20:55 crc kubenswrapper[4760]: 
I0123 18:20:55.521669 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8bqhf"] Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.631630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.646083 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qmvlz"] Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.670843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wmcdm"] Jan 23 18:20:55 crc kubenswrapper[4760]: W0123 18:20:55.670937 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod547f1a6e_8dd3_4ef8_928a_73747c6576d6.slice/crio-7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce WatchSource:0}: Error finding container 7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce: Status 404 returned error can't find the container with id 7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.952111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2kw68" event={"ID":"91790f2e-b7a0-4d5e-ac2b-ea160b59352a","Type":"ContainerStarted","Data":"920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.952820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerStarted","Data":"4c74ec106ff19052f8f16dbffc6dda6d205e10772c32786e5eebdc01d4057bf4"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.953467 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bqhf" 
event={"ID":"d74dde90-69ec-49ed-9531-80aaea5a691e","Type":"ContainerStarted","Data":"ce9d8c6f9f7f4f299fec02fa99c5dd52504a5251201adaefa2bc55b3d0523312"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.954087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8mlk" event={"ID":"de1f4885-e30d-4dd2-a80c-8960404fc972","Type":"ContainerStarted","Data":"dd758ac5e64f0f3a0ae4598adda05a098519a2c8e16c5219d4dd93170041495b"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.956235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qmvlz" event={"ID":"7609f243-769b-47f3-bc58-9f58e68a00a2","Type":"ContainerStarted","Data":"25b698ecbf1487af89f0e85b74879680a04d10b5fa435d6a484f17b7c63e0fd1"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.956945 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" event={"ID":"ce313080-2b39-47d2-93bf-6fcbfd755ae8","Type":"ContainerStarted","Data":"087ff2033ca9de7de6bd0a54c5349c75189693405057ddb5de4af692d9f54f23"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.957941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wmcdm" event={"ID":"547f1a6e-8dd3-4ef8-928a-73747c6576d6","Type":"ContainerStarted","Data":"7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce"} Jan 23 18:20:55 crc kubenswrapper[4760]: I0123 18:20:55.959479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" event={"ID":"6e186470-bf73-4b3b-94ef-0d915a184042","Type":"ContainerStarted","Data":"9e3f611db00d144be6d9248b81bf00dd85c92d67a2016ed811d225bf0efbc56a"} Jan 23 18:20:56 crc kubenswrapper[4760]: I0123 18:20:56.692785 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:20:57 crc kubenswrapper[4760]: I0123 18:20:57.983032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-qmvlz" event={"ID":"7609f243-769b-47f3-bc58-9f58e68a00a2","Type":"ContainerStarted","Data":"a164895318fc686ad18b97be09c3d6db887de7c03b90f97782e85ed8f19d1efc"} Jan 23 18:20:57 crc kubenswrapper[4760]: I0123 18:20:57.987296 4760 generic.go:334] "Generic (PLEG): container finished" podID="ce313080-2b39-47d2-93bf-6fcbfd755ae8" containerID="45e6f993c2346df7f9ea24a79c9973eaaf0593969aba405a09154a8fbdfb7ab2" exitCode=0 Jan 23 18:20:57 crc kubenswrapper[4760]: I0123 18:20:57.987361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" event={"ID":"ce313080-2b39-47d2-93bf-6fcbfd755ae8","Type":"ContainerDied","Data":"45e6f993c2346df7f9ea24a79c9973eaaf0593969aba405a09154a8fbdfb7ab2"} Jan 23 18:20:57 crc kubenswrapper[4760]: I0123 18:20:57.989566 4760 generic.go:334] "Generic (PLEG): container finished" podID="6e186470-bf73-4b3b-94ef-0d915a184042" containerID="06e35df7886d3b885e0b63e76316ff5b649e7cd60354721fb501b6847bec5658" exitCode=0 Jan 23 18:20:57 crc kubenswrapper[4760]: I0123 18:20:57.989636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" event={"ID":"6e186470-bf73-4b3b-94ef-0d915a184042","Type":"ContainerDied","Data":"06e35df7886d3b885e0b63e76316ff5b649e7cd60354721fb501b6847bec5658"} Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.027141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2kw68" event={"ID":"91790f2e-b7a0-4d5e-ac2b-ea160b59352a","Type":"ContainerStarted","Data":"30b76e733373960a876cabab69dcb0f5698d16ae09790b03b72f547e1b1050ae"} Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.039152 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qmvlz" podStartSLOduration=4.03913171 podStartE2EDuration="4.03913171s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:57.999074398 +0000 UTC m=+1201.001532331" watchObservedRunningTime="2026-01-23 18:20:58.03913171 +0000 UTC m=+1201.041589643" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.094239 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2kw68" podStartSLOduration=4.094218717 podStartE2EDuration="4.094218717s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:58.079097558 +0000 UTC m=+1201.081555501" watchObservedRunningTime="2026-01-23 18:20:58.094218717 +0000 UTC m=+1201.096676650" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.411815 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.483139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc\") pod \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.483197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config\") pod \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.483270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb\") pod \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\" (UID: 
\"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.483346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb\") pod \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.483372 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlztm\" (UniqueName: \"kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm\") pod \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\" (UID: \"ce313080-2b39-47d2-93bf-6fcbfd755ae8\") " Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.500347 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm" (OuterVolumeSpecName: "kube-api-access-nlztm") pod "ce313080-2b39-47d2-93bf-6fcbfd755ae8" (UID: "ce313080-2b39-47d2-93bf-6fcbfd755ae8"). InnerVolumeSpecName "kube-api-access-nlztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.516518 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce313080-2b39-47d2-93bf-6fcbfd755ae8" (UID: "ce313080-2b39-47d2-93bf-6fcbfd755ae8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.519788 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce313080-2b39-47d2-93bf-6fcbfd755ae8" (UID: "ce313080-2b39-47d2-93bf-6fcbfd755ae8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.520205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce313080-2b39-47d2-93bf-6fcbfd755ae8" (UID: "ce313080-2b39-47d2-93bf-6fcbfd755ae8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.525095 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config" (OuterVolumeSpecName: "config") pod "ce313080-2b39-47d2-93bf-6fcbfd755ae8" (UID: "ce313080-2b39-47d2-93bf-6fcbfd755ae8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.584968 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.585002 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlztm\" (UniqueName: \"kubernetes.io/projected/ce313080-2b39-47d2-93bf-6fcbfd755ae8-kube-api-access-nlztm\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.585014 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.585023 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:58 crc kubenswrapper[4760]: I0123 18:20:58.585033 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce313080-2b39-47d2-93bf-6fcbfd755ae8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.044318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" event={"ID":"6e186470-bf73-4b3b-94ef-0d915a184042","Type":"ContainerStarted","Data":"b2552b802d78033173efb318d548d2fa4f6aa3973d7117565c6dfb5a85464dcc"} Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.044382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.062690 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.063751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-9c28p" event={"ID":"ce313080-2b39-47d2-93bf-6fcbfd755ae8","Type":"ContainerDied","Data":"087ff2033ca9de7de6bd0a54c5349c75189693405057ddb5de4af692d9f54f23"} Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.063817 4760 scope.go:117] "RemoveContainer" containerID="45e6f993c2346df7f9ea24a79c9973eaaf0593969aba405a09154a8fbdfb7ab2" Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.074670 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" podStartSLOduration=5.074651964 podStartE2EDuration="5.074651964s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:59.067530817 +0000 UTC m=+1202.069988750" watchObservedRunningTime="2026-01-23 18:20:59.074651964 +0000 UTC m=+1202.077109897" Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.150501 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.164334 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-9c28p"] Jan 23 18:20:59 crc kubenswrapper[4760]: I0123 18:20:59.604551 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce313080-2b39-47d2-93bf-6fcbfd755ae8" path="/var/lib/kubelet/pods/ce313080-2b39-47d2-93bf-6fcbfd755ae8/volumes" Jan 23 18:21:01 crc kubenswrapper[4760]: I0123 18:21:01.078216 4760 generic.go:334] "Generic (PLEG): container finished" podID="91790f2e-b7a0-4d5e-ac2b-ea160b59352a" containerID="30b76e733373960a876cabab69dcb0f5698d16ae09790b03b72f547e1b1050ae" exitCode=0 Jan 23 18:21:01 crc 
kubenswrapper[4760]: I0123 18:21:01.078259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2kw68" event={"ID":"91790f2e-b7a0-4d5e-ac2b-ea160b59352a","Type":"ContainerDied","Data":"30b76e733373960a876cabab69dcb0f5698d16ae09790b03b72f547e1b1050ae"} Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.556319 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687307 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7999c\" (UniqueName: \"kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687376 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687470 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.687531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts\") pod \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\" (UID: \"91790f2e-b7a0-4d5e-ac2b-ea160b59352a\") " Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.694651 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.694687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts" (OuterVolumeSpecName: "scripts") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.695929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.696090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c" (OuterVolumeSpecName: "kube-api-access-7999c") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "kube-api-access-7999c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.720710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data" (OuterVolumeSpecName: "config-data") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.734588 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91790f2e-b7a0-4d5e-ac2b-ea160b59352a" (UID: "91790f2e-b7a0-4d5e-ac2b-ea160b59352a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789558 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789603 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789617 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7999c\" (UniqueName: \"kubernetes.io/projected/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-kube-api-access-7999c\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789631 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789643 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4760]: I0123 18:21:03.789655 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91790f2e-b7a0-4d5e-ac2b-ea160b59352a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.106808 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2kw68" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.106821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2kw68" event={"ID":"91790f2e-b7a0-4d5e-ac2b-ea160b59352a","Type":"ContainerDied","Data":"920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0"} Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.107242 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920f18f081c4b762fe502196bc406173a334610772bab01bc909f7038d3e74f0" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.726444 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2kw68"] Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.737698 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2kw68"] Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.820631 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j22ps"] Jan 23 18:21:04 crc kubenswrapper[4760]: E0123 18:21:04.821084 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91790f2e-b7a0-4d5e-ac2b-ea160b59352a" containerName="keystone-bootstrap" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.821108 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="91790f2e-b7a0-4d5e-ac2b-ea160b59352a" containerName="keystone-bootstrap" Jan 23 18:21:04 crc kubenswrapper[4760]: E0123 18:21:04.821122 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce313080-2b39-47d2-93bf-6fcbfd755ae8" containerName="init" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.821129 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce313080-2b39-47d2-93bf-6fcbfd755ae8" containerName="init" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.821316 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="91790f2e-b7a0-4d5e-ac2b-ea160b59352a" containerName="keystone-bootstrap" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.821345 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce313080-2b39-47d2-93bf-6fcbfd755ae8" containerName="init" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.822030 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.823948 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.823986 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.824110 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9hcxg" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.824215 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.824974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.832460 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j22ps"] Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929265 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929327 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929540 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:04 crc kubenswrapper[4760]: I0123 18:21:04.929704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgpq\" (UniqueName: \"kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgpq\" (UniqueName: \"kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031283 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.031379 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys\") pod \"keystone-bootstrap-j22ps\" 
(UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.037002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.037081 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.037799 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.038019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.038707 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.048677 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.053697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qgpq\" (UniqueName: \"kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq\") pod \"keystone-bootstrap-j22ps\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.118432 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"] Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.119203 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" containerID="cri-o://01e490445413872c436809c30f5371018345c884c451493a52ae537253d573df" gracePeriod=10 Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.144665 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.215878 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 23 18:21:05 crc kubenswrapper[4760]: I0123 18:21:05.604838 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91790f2e-b7a0-4d5e-ac2b-ea160b59352a" path="/var/lib/kubelet/pods/91790f2e-b7a0-4d5e-ac2b-ea160b59352a/volumes" Jan 23 18:21:06 crc kubenswrapper[4760]: I0123 18:21:06.128390 4760 generic.go:334] "Generic (PLEG): container finished" podID="904bcdd2-189f-4b64-9953-612341192088" containerID="01e490445413872c436809c30f5371018345c884c451493a52ae537253d573df" exitCode=0 Jan 23 18:21:06 crc kubenswrapper[4760]: I0123 18:21:06.128469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" event={"ID":"904bcdd2-189f-4b64-9953-612341192088","Type":"ContainerDied","Data":"01e490445413872c436809c30f5371018345c884c451493a52ae537253d573df"} Jan 23 18:21:10 crc kubenswrapper[4760]: I0123 18:21:10.216242 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 23 18:21:15 crc kubenswrapper[4760]: I0123 18:21:15.216068 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 23 18:21:15 crc kubenswrapper[4760]: I0123 18:21:15.217038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" Jan 23 18:21:16 crc kubenswrapper[4760]: I0123 18:21:16.076550 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:21:16 crc kubenswrapper[4760]: I0123 18:21:16.076671 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:21:20 crc kubenswrapper[4760]: I0123 18:21:20.215739 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 23 18:21:23 crc kubenswrapper[4760]: E0123 18:21:23.694054 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 23 18:21:23 crc kubenswrapper[4760]: E0123 18:21:23.694808 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnn7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-w8mlk_openstack(de1f4885-e30d-4dd2-a80c-8960404fc972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:21:23 crc kubenswrapper[4760]: E0123 18:21:23.696438 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-w8mlk" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" Jan 23 18:21:24 crc kubenswrapper[4760]: E0123 18:21:24.069215 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 23 18:21:24 crc kubenswrapper[4760]: E0123 18:21:24.069382 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mm9m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-8bqhf_openstack(d74dde90-69ec-49ed-9531-80aaea5a691e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:21:24 crc kubenswrapper[4760]: E0123 18:21:24.070958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-8bqhf" 
podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.289048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" event={"ID":"904bcdd2-189f-4b64-9953-612341192088","Type":"ContainerDied","Data":"51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9"} Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.289301 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f1052bc138d0f0887200127124d1a79cee0a5bdf1074f8d5b3e0b36ac5fde9" Jan 23 18:21:24 crc kubenswrapper[4760]: E0123 18:21:24.290952 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-w8mlk" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" Jan 23 18:21:24 crc kubenswrapper[4760]: E0123 18:21:24.292078 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-8bqhf" podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.373958 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.466974 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sk5c\" (UniqueName: \"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c\") pod \"904bcdd2-189f-4b64-9953-612341192088\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.467261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc\") pod \"904bcdd2-189f-4b64-9953-612341192088\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.467285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb\") pod \"904bcdd2-189f-4b64-9953-612341192088\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.467328 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config\") pod \"904bcdd2-189f-4b64-9953-612341192088\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.467450 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb\") pod \"904bcdd2-189f-4b64-9953-612341192088\" (UID: \"904bcdd2-189f-4b64-9953-612341192088\") " Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.476163 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c" (OuterVolumeSpecName: "kube-api-access-2sk5c") pod "904bcdd2-189f-4b64-9953-612341192088" (UID: "904bcdd2-189f-4b64-9953-612341192088"). InnerVolumeSpecName "kube-api-access-2sk5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.505303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j22ps"] Jan 23 18:21:24 crc kubenswrapper[4760]: W0123 18:21:24.509975 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35fc4c6e_7c68_44cb_bd4e_fc34214ed151.slice/crio-fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da WatchSource:0}: Error finding container fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da: Status 404 returned error can't find the container with id fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.519956 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config" (OuterVolumeSpecName: "config") pod "904bcdd2-189f-4b64-9953-612341192088" (UID: "904bcdd2-189f-4b64-9953-612341192088"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.523168 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "904bcdd2-189f-4b64-9953-612341192088" (UID: "904bcdd2-189f-4b64-9953-612341192088"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.528089 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "904bcdd2-189f-4b64-9953-612341192088" (UID: "904bcdd2-189f-4b64-9953-612341192088"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.534988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "904bcdd2-189f-4b64-9953-612341192088" (UID: "904bcdd2-189f-4b64-9953-612341192088"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.570297 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sk5c\" (UniqueName: \"kubernetes.io/projected/904bcdd2-189f-4b64-9953-612341192088-kube-api-access-2sk5c\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.570329 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.570339 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.570349 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-config\") on node \"crc\" DevicePath \"\"" Jan 
23 18:21:24 crc kubenswrapper[4760]: I0123 18:21:24.570376 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904bcdd2-189f-4b64-9953-612341192088-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.299663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j22ps" event={"ID":"35fc4c6e-7c68-44cb-bd4e-fc34214ed151","Type":"ContainerStarted","Data":"bdedda89a25fb36ba1a9b33449505263fb6cf9b8fc001497d8b05d601b65e855"} Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.299992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j22ps" event={"ID":"35fc4c6e-7c68-44cb-bd4e-fc34214ed151","Type":"ContainerStarted","Data":"fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da"} Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.304394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wmcdm" event={"ID":"547f1a6e-8dd3-4ef8-928a-73747c6576d6","Type":"ContainerStarted","Data":"913909bcba966ba4bba8bca7ce252ed7a85b5c71b9558b99c3572f953124c1a3"} Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.306970 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4vklh" Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.309103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerStarted","Data":"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65"} Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.321005 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j22ps" podStartSLOduration=21.320985812 podStartE2EDuration="21.320985812s" podCreationTimestamp="2026-01-23 18:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:25.317510266 +0000 UTC m=+1228.319968209" watchObservedRunningTime="2026-01-23 18:21:25.320985812 +0000 UTC m=+1228.323443755" Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.335650 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wmcdm" podStartSLOduration=2.937420769 podStartE2EDuration="31.335631559s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="2026-01-23 18:20:55.672742538 +0000 UTC m=+1198.675200471" lastFinishedPulling="2026-01-23 18:21:24.070953328 +0000 UTC m=+1227.073411261" observedRunningTime="2026-01-23 18:21:25.334832486 +0000 UTC m=+1228.337290439" watchObservedRunningTime="2026-01-23 18:21:25.335631559 +0000 UTC m=+1228.338089492" Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.363037 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"] Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.374965 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4vklh"] Jan 23 18:21:25 crc kubenswrapper[4760]: I0123 18:21:25.603217 4760 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="904bcdd2-189f-4b64-9953-612341192088" path="/var/lib/kubelet/pods/904bcdd2-189f-4b64-9953-612341192088/volumes" Jan 23 18:21:26 crc kubenswrapper[4760]: I0123 18:21:26.319382 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerStarted","Data":"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df"} Jan 23 18:21:27 crc kubenswrapper[4760]: I0123 18:21:27.326332 4760 generic.go:334] "Generic (PLEG): container finished" podID="547f1a6e-8dd3-4ef8-928a-73747c6576d6" containerID="913909bcba966ba4bba8bca7ce252ed7a85b5c71b9558b99c3572f953124c1a3" exitCode=0 Jan 23 18:21:27 crc kubenswrapper[4760]: I0123 18:21:27.326460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wmcdm" event={"ID":"547f1a6e-8dd3-4ef8-928a-73747c6576d6","Type":"ContainerDied","Data":"913909bcba966ba4bba8bca7ce252ed7a85b5c71b9558b99c3572f953124c1a3"} Jan 23 18:21:28 crc kubenswrapper[4760]: I0123 18:21:28.335257 4760 generic.go:334] "Generic (PLEG): container finished" podID="35fc4c6e-7c68-44cb-bd4e-fc34214ed151" containerID="bdedda89a25fb36ba1a9b33449505263fb6cf9b8fc001497d8b05d601b65e855" exitCode=0 Jan 23 18:21:28 crc kubenswrapper[4760]: I0123 18:21:28.335346 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j22ps" event={"ID":"35fc4c6e-7c68-44cb-bd4e-fc34214ed151","Type":"ContainerDied","Data":"bdedda89a25fb36ba1a9b33449505263fb6cf9b8fc001497d8b05d601b65e855"} Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.614019 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wmcdm" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle\") pod \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743502 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs\") pod \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data\") pod \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743592 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58l9g\" (UniqueName: \"kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g\") pod \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts\") pod \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\" (UID: \"547f1a6e-8dd3-4ef8-928a-73747c6576d6\") " Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.743880 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs" (OuterVolumeSpecName: "logs") pod "547f1a6e-8dd3-4ef8-928a-73747c6576d6" (UID: "547f1a6e-8dd3-4ef8-928a-73747c6576d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.744096 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547f1a6e-8dd3-4ef8-928a-73747c6576d6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.749351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g" (OuterVolumeSpecName: "kube-api-access-58l9g") pod "547f1a6e-8dd3-4ef8-928a-73747c6576d6" (UID: "547f1a6e-8dd3-4ef8-928a-73747c6576d6"). InnerVolumeSpecName "kube-api-access-58l9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.749749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts" (OuterVolumeSpecName: "scripts") pod "547f1a6e-8dd3-4ef8-928a-73747c6576d6" (UID: "547f1a6e-8dd3-4ef8-928a-73747c6576d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.778806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data" (OuterVolumeSpecName: "config-data") pod "547f1a6e-8dd3-4ef8-928a-73747c6576d6" (UID: "547f1a6e-8dd3-4ef8-928a-73747c6576d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.794217 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "547f1a6e-8dd3-4ef8-928a-73747c6576d6" (UID: "547f1a6e-8dd3-4ef8-928a-73747c6576d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.845151 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.845175 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58l9g\" (UniqueName: \"kubernetes.io/projected/547f1a6e-8dd3-4ef8-928a-73747c6576d6-kube-api-access-58l9g\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.845186 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:28.845198 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/547f1a6e-8dd3-4ef8-928a-73747c6576d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.345181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wmcdm" event={"ID":"547f1a6e-8dd3-4ef8-928a-73747c6576d6","Type":"ContainerDied","Data":"7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce"} Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.345215 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wmcdm" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.345233 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0b1e6d6322d7b6fa289e275fb5867b2fce55f56ea5828f1822b6baed4d67ce" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.764561 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66645c546d-bcr2r"] Jan 23 18:21:29 crc kubenswrapper[4760]: E0123 18:21:29.765201 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="init" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.765214 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="init" Jan 23 18:21:29 crc kubenswrapper[4760]: E0123 18:21:29.765225 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.765231 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" Jan 23 18:21:29 crc kubenswrapper[4760]: E0123 18:21:29.765250 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547f1a6e-8dd3-4ef8-928a-73747c6576d6" containerName="placement-db-sync" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.765256 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="547f1a6e-8dd3-4ef8-928a-73747c6576d6" containerName="placement-db-sync" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.765399 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="904bcdd2-189f-4b64-9953-612341192088" containerName="dnsmasq-dns" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.765422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="547f1a6e-8dd3-4ef8-928a-73747c6576d6" 
containerName="placement-db-sync" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.766612 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.770398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.770675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.770822 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mj5jx" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.770961 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.771120 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.778639 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66645c546d-bcr2r"] Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-scripts\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-combined-ca-bundle\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " 
pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-internal-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-public-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c8acd-aef7-4575-bf9b-18a4e220b34b-logs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-config-data\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.874805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csr44\" (UniqueName: \"kubernetes.io/projected/f40c8acd-aef7-4575-bf9b-18a4e220b34b-kube-api-access-csr44\") pod \"placement-66645c546d-bcr2r\" (UID: 
\"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.976940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-combined-ca-bundle\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-internal-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-public-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c8acd-aef7-4575-bf9b-18a4e220b34b-logs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-config-data\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 
23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977335 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csr44\" (UniqueName: \"kubernetes.io/projected/f40c8acd-aef7-4575-bf9b-18a4e220b34b-kube-api-access-csr44\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.977393 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-scripts\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.978338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c8acd-aef7-4575-bf9b-18a4e220b34b-logs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.983554 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-internal-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.983854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-config-data\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.988611 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-combined-ca-bundle\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.988680 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-scripts\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:29 crc kubenswrapper[4760]: I0123 18:21:29.995160 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f40c8acd-aef7-4575-bf9b-18a4e220b34b-public-tls-certs\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:30 crc kubenswrapper[4760]: I0123 18:21:30.002398 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csr44\" (UniqueName: \"kubernetes.io/projected/f40c8acd-aef7-4575-bf9b-18a4e220b34b-kube-api-access-csr44\") pod \"placement-66645c546d-bcr2r\" (UID: \"f40c8acd-aef7-4575-bf9b-18a4e220b34b\") " pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:30 crc kubenswrapper[4760]: I0123 18:21:30.088949 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.364162 4760 generic.go:334] "Generic (PLEG): container finished" podID="7609f243-769b-47f3-bc58-9f58e68a00a2" containerID="a164895318fc686ad18b97be09c3d6db887de7c03b90f97782e85ed8f19d1efc" exitCode=0 Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.364329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qmvlz" event={"ID":"7609f243-769b-47f3-bc58-9f58e68a00a2","Type":"ContainerDied","Data":"a164895318fc686ad18b97be09c3d6db887de7c03b90f97782e85ed8f19d1efc"} Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.713001 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809321 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qgpq\" (UniqueName: \"kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809563 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809707 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.809730 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data\") pod \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\" (UID: \"35fc4c6e-7c68-44cb-bd4e-fc34214ed151\") " Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.819336 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.819431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts" (OuterVolumeSpecName: "scripts") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.819568 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq" (OuterVolumeSpecName: "kube-api-access-7qgpq") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "kube-api-access-7qgpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.819804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.857655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data" (OuterVolumeSpecName: "config-data") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.862192 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35fc4c6e-7c68-44cb-bd4e-fc34214ed151" (UID: "35fc4c6e-7c68-44cb-bd4e-fc34214ed151"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911890 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qgpq\" (UniqueName: \"kubernetes.io/projected/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-kube-api-access-7qgpq\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911935 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911948 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911961 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911974 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:31 crc kubenswrapper[4760]: I0123 18:21:31.911986 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35fc4c6e-7c68-44cb-bd4e-fc34214ed151-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.126037 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66645c546d-bcr2r"] Jan 23 18:21:32 crc kubenswrapper[4760]: W0123 18:21:32.131251 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf40c8acd_aef7_4575_bf9b_18a4e220b34b.slice/crio-65486721f64c25a4bd0b1e98e376ff1e8b9690b2162c68475ac3f80bbdab1b3b WatchSource:0}: Error finding container 65486721f64c25a4bd0b1e98e376ff1e8b9690b2162c68475ac3f80bbdab1b3b: Status 404 returned error can't find the container with id 65486721f64c25a4bd0b1e98e376ff1e8b9690b2162c68475ac3f80bbdab1b3b Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.411043 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j22ps" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.411052 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j22ps" event={"ID":"35fc4c6e-7c68-44cb-bd4e-fc34214ed151","Type":"ContainerDied","Data":"fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da"} Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.411443 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd96dc7bb657f0f37b3724276e855092edc59e8ecf2bd8b48e2a36c84e2dc1da" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.412743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66645c546d-bcr2r" event={"ID":"f40c8acd-aef7-4575-bf9b-18a4e220b34b","Type":"ContainerStarted","Data":"ec9d6959c55d36713ad7e70ba13047e1683115536b8789988ebb23150bcefb8e"} Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.412771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66645c546d-bcr2r" event={"ID":"f40c8acd-aef7-4575-bf9b-18a4e220b34b","Type":"ContainerStarted","Data":"65486721f64c25a4bd0b1e98e376ff1e8b9690b2162c68475ac3f80bbdab1b3b"} Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.415331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerStarted","Data":"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a"} Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.632603 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.727671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle\") pod \"7609f243-769b-47f3-bc58-9f58e68a00a2\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.727758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gltrx\" (UniqueName: \"kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx\") pod \"7609f243-769b-47f3-bc58-9f58e68a00a2\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.727964 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config\") pod \"7609f243-769b-47f3-bc58-9f58e68a00a2\" (UID: \"7609f243-769b-47f3-bc58-9f58e68a00a2\") " Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.732481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx" (OuterVolumeSpecName: "kube-api-access-gltrx") pod "7609f243-769b-47f3-bc58-9f58e68a00a2" (UID: "7609f243-769b-47f3-bc58-9f58e68a00a2"). InnerVolumeSpecName "kube-api-access-gltrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.783160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7609f243-769b-47f3-bc58-9f58e68a00a2" (UID: "7609f243-769b-47f3-bc58-9f58e68a00a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.828791 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config" (OuterVolumeSpecName: "config") pod "7609f243-769b-47f3-bc58-9f58e68a00a2" (UID: "7609f243-769b-47f3-bc58-9f58e68a00a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.829549 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.829573 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7609f243-769b-47f3-bc58-9f58e68a00a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.829584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gltrx\" (UniqueName: \"kubernetes.io/projected/7609f243-769b-47f3-bc58-9f58e68a00a2-kube-api-access-gltrx\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.838270 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74944c68c4-mnfbr"] Jan 23 18:21:32 crc kubenswrapper[4760]: E0123 18:21:32.838645 4760 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7609f243-769b-47f3-bc58-9f58e68a00a2" containerName="neutron-db-sync" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.838661 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7609f243-769b-47f3-bc58-9f58e68a00a2" containerName="neutron-db-sync" Jan 23 18:21:32 crc kubenswrapper[4760]: E0123 18:21:32.838673 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fc4c6e-7c68-44cb-bd4e-fc34214ed151" containerName="keystone-bootstrap" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.838680 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fc4c6e-7c68-44cb-bd4e-fc34214ed151" containerName="keystone-bootstrap" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.838846 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7609f243-769b-47f3-bc58-9f58e68a00a2" containerName="neutron-db-sync" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.838860 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="35fc4c6e-7c68-44cb-bd4e-fc34214ed151" containerName="keystone-bootstrap" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.839392 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.843634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.843661 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.844009 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.844063 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9hcxg" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.844126 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.845117 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74944c68c4-mnfbr"] Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.846081 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-internal-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931503 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-public-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " 
pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-config-data\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-credential-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-combined-ca-bundle\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-scripts\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-fernet-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " 
pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:32 crc kubenswrapper[4760]: I0123 18:21:32.931950 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8jk\" (UniqueName: \"kubernetes.io/projected/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-kube-api-access-kl8jk\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8jk\" (UniqueName: \"kubernetes.io/projected/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-kube-api-access-kl8jk\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-internal-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-public-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-config-data\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " 
pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-credential-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.033986 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-combined-ca-bundle\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.034011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-fernet-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.034037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-scripts\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.038945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-config-data\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.038985 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-credential-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.045356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-fernet-keys\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.045923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-scripts\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.046153 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-internal-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.046548 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-public-tls-certs\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.046588 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-combined-ca-bundle\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.050592 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8jk\" (UniqueName: \"kubernetes.io/projected/5201f9c2-1e25-4192-8bd6-2e0fb4a5b902-kube-api-access-kl8jk\") pod \"keystone-74944c68c4-mnfbr\" (UID: \"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902\") " pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.188723 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.491017 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66645c546d-bcr2r" event={"ID":"f40c8acd-aef7-4575-bf9b-18a4e220b34b","Type":"ContainerStarted","Data":"86c86ba5cc7ea17e1dbb817c36ecc8d3baf14eddbbe66dbba71d10210300e899"} Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.491352 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.491425 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.514792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qmvlz" event={"ID":"7609f243-769b-47f3-bc58-9f58e68a00a2","Type":"ContainerDied","Data":"25b698ecbf1487af89f0e85b74879680a04d10b5fa435d6a484f17b7c63e0fd1"} Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.514840 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b698ecbf1487af89f0e85b74879680a04d10b5fa435d6a484f17b7c63e0fd1" Jan 23 18:21:33 crc 
kubenswrapper[4760]: I0123 18:21:33.514860 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qmvlz" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.531858 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66645c546d-bcr2r" podStartSLOduration=4.5318381070000004 podStartE2EDuration="4.531838107s" podCreationTimestamp="2026-01-23 18:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:33.521900053 +0000 UTC m=+1236.524357996" watchObservedRunningTime="2026-01-23 18:21:33.531838107 +0000 UTC m=+1236.534296040" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.650956 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74944c68c4-mnfbr"] Jan 23 18:21:33 crc kubenswrapper[4760]: W0123 18:21:33.671177 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5201f9c2_1e25_4192_8bd6_2e0fb4a5b902.slice/crio-ecbcedb694acd33c4ca5ba6bbbc3636924a53ae5d6ebf648e722a5e8d402ab53 WatchSource:0}: Error finding container ecbcedb694acd33c4ca5ba6bbbc3636924a53ae5d6ebf648e722a5e8d402ab53: Status 404 returned error can't find the container with id ecbcedb694acd33c4ca5ba6bbbc3636924a53ae5d6ebf648e722a5e8d402ab53 Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.822482 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.824050 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.849523 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.854486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.854539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.854600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.854640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.854670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2s292\" (UniqueName: \"kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.944344 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.946032 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.950299 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.950863 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.950945 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-88m77" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.950952 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.955949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.956007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s292\" (UniqueName: \"kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: 
\"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.956069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.956106 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.956179 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.957233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.957252 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc 
kubenswrapper[4760]: I0123 18:21:33.957756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.957910 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.959910 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:21:33 crc kubenswrapper[4760]: I0123 18:21:33.975160 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s292\" (UniqueName: \"kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292\") pod \"dnsmasq-dns-7b946d459c-kvmhk\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.057906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.057974 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfmx\" (UniqueName: \"kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx\") pod \"neutron-68d698bbd-sfnql\" (UID: 
\"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.058466 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.058497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.058537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.146545 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.160337 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.160378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.160427 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.160506 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.160550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfmx\" (UniqueName: \"kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: 
I0123 18:21:34.165357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.166210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.169054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.170084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.183673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfmx\" (UniqueName: \"kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx\") pod \"neutron-68d698bbd-sfnql\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.318508 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.528174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74944c68c4-mnfbr" event={"ID":"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902","Type":"ContainerStarted","Data":"ecbcedb694acd33c4ca5ba6bbbc3636924a53ae5d6ebf648e722a5e8d402ab53"} Jan 23 18:21:34 crc kubenswrapper[4760]: I0123 18:21:34.734488 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.209148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:21:35 crc kubenswrapper[4760]: W0123 18:21:35.218335 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b9aa0fc_ef7a_49f6_b76a_aeb5b618a16a.slice/crio-af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04 WatchSource:0}: Error finding container af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04: Status 404 returned error can't find the container with id af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04 Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.539071 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74944c68c4-mnfbr" event={"ID":"5201f9c2-1e25-4192-8bd6-2e0fb4a5b902","Type":"ContainerStarted","Data":"9775ab70a0a1689e915481e38e12915cc52f184953ba681d6e563a8d998e0762"} Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.539837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.542913 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" 
event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerStarted","Data":"2c8c7eef1616f43f0f97bf1f7c2a50971cc464c1d8686eb0dbe7503df29aaee3"} Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.542994 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerStarted","Data":"af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04"} Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.548051 4760 generic.go:334] "Generic (PLEG): container finished" podID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerID="c286792d35f039d36a1713772d812c3b4a0504fbc06604a7b2009ec9155462de" exitCode=0 Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.549029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" event={"ID":"d932e4f8-84d4-45d4-bd22-24f2210215e3","Type":"ContainerDied","Data":"c286792d35f039d36a1713772d812c3b4a0504fbc06604a7b2009ec9155462de"} Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.549081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" event={"ID":"d932e4f8-84d4-45d4-bd22-24f2210215e3","Type":"ContainerStarted","Data":"94a4c8a80bbc67843a17d0d9709ee6594f0ca9d47d76a36086677a685906a5ec"} Jan 23 18:21:35 crc kubenswrapper[4760]: I0123 18:21:35.569851 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74944c68c4-mnfbr" podStartSLOduration=3.569832722 podStartE2EDuration="3.569832722s" podCreationTimestamp="2026-01-23 18:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:35.564727636 +0000 UTC m=+1238.567185579" watchObservedRunningTime="2026-01-23 18:21:35.569832722 +0000 UTC m=+1238.572290655" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.439011 4760 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/neutron-585856f577-q8bpp"] Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.441090 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.443542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.443622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.454104 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585856f577-q8bpp"] Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.512964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-public-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.513050 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ktkq\" (UniqueName: \"kubernetes.io/projected/789429a2-8a44-4914-b54c-65e7ccaa180c-kube-api-access-9ktkq\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.513073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-ovndb-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc 
kubenswrapper[4760]: I0123 18:21:36.513098 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.513125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-combined-ca-bundle\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.513152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-internal-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.513190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-httpd-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.564250 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerStarted","Data":"296eb75f6869d405f821a570e52f26fa90f16b0254871df72af68b267051027b"} Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.564435 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.575075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" event={"ID":"d932e4f8-84d4-45d4-bd22-24f2210215e3","Type":"ContainerStarted","Data":"3dd42838a2862f6aa54c34e3eea44369491ea55b7a277ff28bc0ad57f3f1421e"} Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.575832 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.593914 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68d698bbd-sfnql" podStartSLOduration=3.593886579 podStartE2EDuration="3.593886579s" podCreationTimestamp="2026-01-23 18:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:36.579423003 +0000 UTC m=+1239.581880956" watchObservedRunningTime="2026-01-23 18:21:36.593886579 +0000 UTC m=+1239.596344522" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.614666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-httpd-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.614878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-public-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.614892 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" podStartSLOduration=3.614880688 podStartE2EDuration="3.614880688s" podCreationTimestamp="2026-01-23 18:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:36.603135835 +0000 UTC m=+1239.605593768" watchObservedRunningTime="2026-01-23 18:21:36.614880688 +0000 UTC m=+1239.617338621" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.615071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ktkq\" (UniqueName: \"kubernetes.io/projected/789429a2-8a44-4914-b54c-65e7ccaa180c-kube-api-access-9ktkq\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.615122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-ovndb-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.615192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.615274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-combined-ca-bundle\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: 
I0123 18:21:36.615390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-internal-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.624907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-ovndb-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.625169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-public-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.627642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-httpd-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.627935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-config\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.631506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-combined-ca-bundle\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.637273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/789429a2-8a44-4914-b54c-65e7ccaa180c-internal-tls-certs\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.645167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ktkq\" (UniqueName: \"kubernetes.io/projected/789429a2-8a44-4914-b54c-65e7ccaa180c-kube-api-access-9ktkq\") pod \"neutron-585856f577-q8bpp\" (UID: \"789429a2-8a44-4914-b54c-65e7ccaa180c\") " pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:36 crc kubenswrapper[4760]: I0123 18:21:36.763696 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:41 crc kubenswrapper[4760]: I0123 18:21:41.260631 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585856f577-q8bpp"] Jan 23 18:21:41 crc kubenswrapper[4760]: W0123 18:21:41.988452 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod789429a2_8a44_4914_b54c_65e7ccaa180c.slice/crio-4e51535d438b9c111a74a954b3e511b6238d551cf8151b988ca259b0acf1842e WatchSource:0}: Error finding container 4e51535d438b9c111a74a954b3e511b6238d551cf8151b988ca259b0acf1842e: Status 404 returned error can't find the container with id 4e51535d438b9c111a74a954b3e511b6238d551cf8151b988ca259b0acf1842e Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.668101 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585856f577-q8bpp" event={"ID":"789429a2-8a44-4914-b54c-65e7ccaa180c","Type":"ContainerStarted","Data":"b5376fee675a12507e68dccbf47cc3a20b1973560c1718c7ab7492c2cdc77cb8"} Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.668490 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585856f577-q8bpp" event={"ID":"789429a2-8a44-4914-b54c-65e7ccaa180c","Type":"ContainerStarted","Data":"d1761db0b9c1381092a5e4c0bb27f5562802e22ad7c68c0dbc38fc506e24e8aa"} Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.668508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585856f577-q8bpp" event={"ID":"789429a2-8a44-4914-b54c-65e7ccaa180c","Type":"ContainerStarted","Data":"4e51535d438b9c111a74a954b3e511b6238d551cf8151b988ca259b0acf1842e"} Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.668624 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.673796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerStarted","Data":"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1"} Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.673988 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-central-agent" containerID="cri-o://f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65" gracePeriod=30 Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.674249 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="proxy-httpd" containerID="cri-o://169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1" gracePeriod=30 Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.674452 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-notification-agent" containerID="cri-o://cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df" gracePeriod=30 Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.674501 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="sg-core" containerID="cri-o://0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a" gracePeriod=30 Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.674751 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.680059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bqhf" 
event={"ID":"d74dde90-69ec-49ed-9531-80aaea5a691e","Type":"ContainerStarted","Data":"90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f"} Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.689501 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-585856f577-q8bpp" podStartSLOduration=6.688560267 podStartE2EDuration="6.688560267s" podCreationTimestamp="2026-01-23 18:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:42.688162446 +0000 UTC m=+1245.690620369" watchObservedRunningTime="2026-01-23 18:21:42.688560267 +0000 UTC m=+1245.691018200" Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.716112 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.080776906 podStartE2EDuration="48.71609435s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="2026-01-23 18:20:55.393938235 +0000 UTC m=+1198.396396168" lastFinishedPulling="2026-01-23 18:21:42.029255679 +0000 UTC m=+1245.031713612" observedRunningTime="2026-01-23 18:21:42.71608599 +0000 UTC m=+1245.718543923" watchObservedRunningTime="2026-01-23 18:21:42.71609435 +0000 UTC m=+1245.718552273" Jan 23 18:21:42 crc kubenswrapper[4760]: I0123 18:21:42.733932 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8bqhf" podStartSLOduration=2.2579525990000002 podStartE2EDuration="48.733913305s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="2026-01-23 18:20:55.53366723 +0000 UTC m=+1198.536125153" lastFinishedPulling="2026-01-23 18:21:42.009627926 +0000 UTC m=+1245.012085859" observedRunningTime="2026-01-23 18:21:42.730640208 +0000 UTC m=+1245.733098141" watchObservedRunningTime="2026-01-23 18:21:42.733913305 +0000 UTC m=+1245.736371238" Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 
18:21:43.691523 4760 generic.go:334] "Generic (PLEG): container finished" podID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerID="169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1" exitCode=0 Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.691786 4760 generic.go:334] "Generic (PLEG): container finished" podID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerID="0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a" exitCode=2 Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.691794 4760 generic.go:334] "Generic (PLEG): container finished" podID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerID="f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65" exitCode=0 Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.691607 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerDied","Data":"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1"} Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.691848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerDied","Data":"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a"} Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.691860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerDied","Data":"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65"} Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.694261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8mlk" event={"ID":"de1f4885-e30d-4dd2-a80c-8960404fc972","Type":"ContainerStarted","Data":"ba7a508f8719421e30a11d12cda12b9fbd7056e28ec7a884e287669cf5671648"} Jan 23 18:21:43 crc kubenswrapper[4760]: I0123 18:21:43.715165 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-w8mlk" podStartSLOduration=3.101761082 podStartE2EDuration="49.715147411s" podCreationTimestamp="2026-01-23 18:20:54 +0000 UTC" firstStartedPulling="2026-01-23 18:20:55.394428088 +0000 UTC m=+1198.396886021" lastFinishedPulling="2026-01-23 18:21:42.007814407 +0000 UTC m=+1245.010272350" observedRunningTime="2026-01-23 18:21:43.710795175 +0000 UTC m=+1246.713253128" watchObservedRunningTime="2026-01-23 18:21:43.715147411 +0000 UTC m=+1246.717605354" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.148797 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.233246 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.233538 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="dnsmasq-dns" containerID="cri-o://b2552b802d78033173efb318d548d2fa4f6aa3973d7117565c6dfb5a85464dcc" gracePeriod=10 Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.711444 4760 generic.go:334] "Generic (PLEG): container finished" podID="6e186470-bf73-4b3b-94ef-0d915a184042" containerID="b2552b802d78033173efb318d548d2fa4f6aa3973d7117565c6dfb5a85464dcc" exitCode=0 Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.711506 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" event={"ID":"6e186470-bf73-4b3b-94ef-0d915a184042","Type":"ContainerDied","Data":"b2552b802d78033173efb318d548d2fa4f6aa3973d7117565c6dfb5a85464dcc"} Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.801426 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:21:44 crc kubenswrapper[4760]: E0123 18:21:44.847293 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd74dde90_69ec_49ed_9531_80aaea5a691e.slice/crio-conmon-90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd74dde90_69ec_49ed_9531_80aaea5a691e.slice/crio-90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.860494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config\") pod \"6e186470-bf73-4b3b-94ef-0d915a184042\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.860799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb\") pod \"6e186470-bf73-4b3b-94ef-0d915a184042\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.860842 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc\") pod \"6e186470-bf73-4b3b-94ef-0d915a184042\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.860905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24q88\" (UniqueName: 
\"kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88\") pod \"6e186470-bf73-4b3b-94ef-0d915a184042\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.860930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb\") pod \"6e186470-bf73-4b3b-94ef-0d915a184042\" (UID: \"6e186470-bf73-4b3b-94ef-0d915a184042\") " Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.867747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88" (OuterVolumeSpecName: "kube-api-access-24q88") pod "6e186470-bf73-4b3b-94ef-0d915a184042" (UID: "6e186470-bf73-4b3b-94ef-0d915a184042"). InnerVolumeSpecName "kube-api-access-24q88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.907037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e186470-bf73-4b3b-94ef-0d915a184042" (UID: "6e186470-bf73-4b3b-94ef-0d915a184042"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.917528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config" (OuterVolumeSpecName: "config") pod "6e186470-bf73-4b3b-94ef-0d915a184042" (UID: "6e186470-bf73-4b3b-94ef-0d915a184042"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.926929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e186470-bf73-4b3b-94ef-0d915a184042" (UID: "6e186470-bf73-4b3b-94ef-0d915a184042"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.935481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e186470-bf73-4b3b-94ef-0d915a184042" (UID: "6e186470-bf73-4b3b-94ef-0d915a184042"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.962334 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.962366 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.962381 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.962394 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24q88\" (UniqueName: \"kubernetes.io/projected/6e186470-bf73-4b3b-94ef-0d915a184042-kube-api-access-24q88\") on node \"crc\" DevicePath \"\"" Jan 
23 18:21:44 crc kubenswrapper[4760]: I0123 18:21:44.962417 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e186470-bf73-4b3b-94ef-0d915a184042-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.265342 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369681 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369779 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qghcc\" 
(UniqueName: \"kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.369926 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.370003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data\") pod \"b3515d0e-53eb-486e-b727-aaac44882bc2\" (UID: \"b3515d0e-53eb-486e-b727-aaac44882bc2\") " Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.370366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.373161 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.377587 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts" (OuterVolumeSpecName: "scripts") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.378626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc" (OuterVolumeSpecName: "kube-api-access-qghcc") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "kube-api-access-qghcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.398117 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.448587 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.463746 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data" (OuterVolumeSpecName: "config-data") pod "b3515d0e-53eb-486e-b727-aaac44882bc2" (UID: "b3515d0e-53eb-486e-b727-aaac44882bc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472025 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qghcc\" (UniqueName: \"kubernetes.io/projected/b3515d0e-53eb-486e-b727-aaac44882bc2-kube-api-access-qghcc\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472057 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472066 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472074 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472081 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472091 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b3515d0e-53eb-486e-b727-aaac44882bc2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.472099 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3515d0e-53eb-486e-b727-aaac44882bc2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.721925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" event={"ID":"6e186470-bf73-4b3b-94ef-0d915a184042","Type":"ContainerDied","Data":"9e3f611db00d144be6d9248b81bf00dd85c92d67a2016ed811d225bf0efbc56a"} Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.721999 4760 scope.go:117] "RemoveContainer" containerID="b2552b802d78033173efb318d548d2fa4f6aa3973d7117565c6dfb5a85464dcc" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.722002 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ccbhl" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.726106 4760 generic.go:334] "Generic (PLEG): container finished" podID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerID="cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df" exitCode=0 Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.726174 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.726191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerDied","Data":"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df"} Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.726229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3515d0e-53eb-486e-b727-aaac44882bc2","Type":"ContainerDied","Data":"4c74ec106ff19052f8f16dbffc6dda6d205e10772c32786e5eebdc01d4057bf4"} Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.728613 4760 generic.go:334] "Generic (PLEG): container finished" podID="d74dde90-69ec-49ed-9531-80aaea5a691e" containerID="90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f" exitCode=0 Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.728676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bqhf" event={"ID":"d74dde90-69ec-49ed-9531-80aaea5a691e","Type":"ContainerDied","Data":"90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f"} Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.747033 4760 scope.go:117] "RemoveContainer" containerID="06e35df7886d3b885e0b63e76316ff5b649e7cd60354721fb501b6847bec5658" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.748877 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.756239 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ccbhl"] Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.773084 4760 scope.go:117] "RemoveContainer" containerID="169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.787497 4760 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.799676 4760 scope.go:117] "RemoveContainer" containerID="0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.815735 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.833896 4760 scope.go:117] "RemoveContainer" containerID="cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842307 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842760 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-notification-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842781 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-notification-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842800 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-central-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842808 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-central-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842824 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="dnsmasq-dns" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842833 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="dnsmasq-dns" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842848 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="proxy-httpd" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842856 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="proxy-httpd" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842864 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="init" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842871 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="init" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.842888 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="sg-core" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.842895 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="sg-core" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.843096 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-notification-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.843115 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="sg-core" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.843132 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="proxy-httpd" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.843155 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" containerName="ceilometer-central-agent" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.843168 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" containerName="dnsmasq-dns" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.844967 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.847236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.847466 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.853301 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.859616 4760 scope.go:117] "RemoveContainer" containerID="f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9x2g\" (UniqueName: \"kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878503 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.878535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.882635 4760 scope.go:117] "RemoveContainer" containerID="169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.883145 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1\": container with ID 
starting with 169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1 not found: ID does not exist" containerID="169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883185 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1"} err="failed to get container status \"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1\": rpc error: code = NotFound desc = could not find container \"169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1\": container with ID starting with 169f57a1b111798be52b68a38faf05c57aabc0eb393d10a390ede390025275c1 not found: ID does not exist" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883211 4760 scope.go:117] "RemoveContainer" containerID="0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.883523 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a\": container with ID starting with 0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a not found: ID does not exist" containerID="0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883569 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a"} err="failed to get container status \"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a\": rpc error: code = NotFound desc = could not find container \"0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a\": container with ID starting with 0ca41d2de55d663594c97a812f254f189f388cf422362659b672dc8ee82b819a not found: 
ID does not exist" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883588 4760 scope.go:117] "RemoveContainer" containerID="cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.883848 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df\": container with ID starting with cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df not found: ID does not exist" containerID="cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883874 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df"} err="failed to get container status \"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df\": rpc error: code = NotFound desc = could not find container \"cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df\": container with ID starting with cbc2b7e722c1656cff7fa8dd64d1fdd8cefa5c859a57aebc6faabd426e0d58df not found: ID does not exist" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.883891 4760 scope.go:117] "RemoveContainer" containerID="f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65" Jan 23 18:21:45 crc kubenswrapper[4760]: E0123 18:21:45.884168 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65\": container with ID starting with f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65 not found: ID does not exist" containerID="f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.884220 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65"} err="failed to get container status \"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65\": rpc error: code = NotFound desc = could not find container \"f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65\": container with ID starting with f6e2a4d6d159f7a6345e4162a2fd3fc0be35bbbde6af69607373658438b95b65 not found: ID does not exist" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.979929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980049 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980113 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980133 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0" Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980179 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9x2g\" (UniqueName: \"kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.980465 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.981478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.984295 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.984771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.984925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.986868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:45 crc kubenswrapper[4760]: I0123 18:21:45.997487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9x2g\" (UniqueName: \"kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g\") pod \"ceilometer-0\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " pod="openstack/ceilometer-0"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.075368 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.075476 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.075525 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.076306 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.076401 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1" gracePeriod=600
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.165906 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.609993 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:21:46 crc kubenswrapper[4760]: W0123 18:21:46.612706 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54db138a_d54f_4224_a4eb_00fc0f39ed3c.slice/crio-f49b4ffa826de18fecb8b056db54b35e5db3622c5d6ccaabfd22fbd658e97c5c WatchSource:0}: Error finding container f49b4ffa826de18fecb8b056db54b35e5db3622c5d6ccaabfd22fbd658e97c5c: Status 404 returned error can't find the container with id f49b4ffa826de18fecb8b056db54b35e5db3622c5d6ccaabfd22fbd658e97c5c
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.738230 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1" exitCode=0
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.738517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1"}
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.738565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491"}
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.738588 4760 scope.go:117] "RemoveContainer" containerID="fd7531a7445766d1859395c87897c2fd5d7fec89de4fdbffda0e57724c6d100c"
Jan 23 18:21:46 crc kubenswrapper[4760]: I0123 18:21:46.740197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerStarted","Data":"f49b4ffa826de18fecb8b056db54b35e5db3622c5d6ccaabfd22fbd658e97c5c"}
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.099185 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8bqhf"
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.198788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data\") pod \"d74dde90-69ec-49ed-9531-80aaea5a691e\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") "
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.198849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm9m2\" (UniqueName: \"kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2\") pod \"d74dde90-69ec-49ed-9531-80aaea5a691e\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") "
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.198977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle\") pod \"d74dde90-69ec-49ed-9531-80aaea5a691e\" (UID: \"d74dde90-69ec-49ed-9531-80aaea5a691e\") "
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.204631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d74dde90-69ec-49ed-9531-80aaea5a691e" (UID: "d74dde90-69ec-49ed-9531-80aaea5a691e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.204801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2" (OuterVolumeSpecName: "kube-api-access-mm9m2") pod "d74dde90-69ec-49ed-9531-80aaea5a691e" (UID: "d74dde90-69ec-49ed-9531-80aaea5a691e"). InnerVolumeSpecName "kube-api-access-mm9m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.224742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d74dde90-69ec-49ed-9531-80aaea5a691e" (UID: "d74dde90-69ec-49ed-9531-80aaea5a691e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.301045 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.301372 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm9m2\" (UniqueName: \"kubernetes.io/projected/d74dde90-69ec-49ed-9531-80aaea5a691e-kube-api-access-mm9m2\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.301388 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74dde90-69ec-49ed-9531-80aaea5a691e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.615391 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e186470-bf73-4b3b-94ef-0d915a184042" path="/var/lib/kubelet/pods/6e186470-bf73-4b3b-94ef-0d915a184042/volumes"
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.616740 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3515d0e-53eb-486e-b727-aaac44882bc2" path="/var/lib/kubelet/pods/b3515d0e-53eb-486e-b727-aaac44882bc2/volumes"
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.755206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerStarted","Data":"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a"}
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.756879 4760 generic.go:334] "Generic (PLEG): container finished" podID="de1f4885-e30d-4dd2-a80c-8960404fc972" containerID="ba7a508f8719421e30a11d12cda12b9fbd7056e28ec7a884e287669cf5671648" exitCode=0
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.756939 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8mlk" event={"ID":"de1f4885-e30d-4dd2-a80c-8960404fc972","Type":"ContainerDied","Data":"ba7a508f8719421e30a11d12cda12b9fbd7056e28ec7a884e287669cf5671648"}
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.763861 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bqhf" event={"ID":"d74dde90-69ec-49ed-9531-80aaea5a691e","Type":"ContainerDied","Data":"ce9d8c6f9f7f4f299fec02fa99c5dd52504a5251201adaefa2bc55b3d0523312"}
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.763902 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9d8c6f9f7f4f299fec02fa99c5dd52504a5251201adaefa2bc55b3d0523312"
Jan 23 18:21:47 crc kubenswrapper[4760]: I0123 18:21:47.763915 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8bqhf"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.062758 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"]
Jan 23 18:21:48 crc kubenswrapper[4760]: E0123 18:21:48.068552 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" containerName="barbican-db-sync"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.068591 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" containerName="barbican-db-sync"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.069185 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" containerName="barbican-db-sync"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.079816 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.088724 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.109630 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-696f8c69dc-tcwp6"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.112739 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.116588 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.116820 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9dcp9"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.124300 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.174077 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.201471 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-696f8c69dc-tcwp6"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.216481 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.218394 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data-custom\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9xs\" (UniqueName: \"kubernetes.io/projected/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-kube-api-access-mt9xs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-combined-ca-bundle\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230158 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99b9g\" (UniqueName: \"kubernetes.io/projected/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-kube-api-access-99b9g\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230185 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230223 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-logs\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-logs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230299 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data-custom\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230319 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-combined-ca-bundle\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.230667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.259466 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332889 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data-custom\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9xs\" (UniqueName: \"kubernetes.io/projected/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-kube-api-access-mt9xs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.332982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-combined-ca-bundle\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333035 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99b9g\" (UniqueName: \"kubernetes.io/projected/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-kube-api-access-99b9g\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-logs\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333191 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2cg\" (UniqueName: \"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-logs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data-custom\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.333300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-combined-ca-bundle\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.341113 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-logs\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.341123 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-logs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.344275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-combined-ca-bundle\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.350755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-combined-ca-bundle\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.351625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data-custom\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.365457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data-custom\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.366494 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-config-data\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.366586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-config-data\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.371471 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9xs\" (UniqueName: \"kubernetes.io/projected/e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5-kube-api-access-mt9xs\") pod \"barbican-worker-696f8c69dc-tcwp6\" (UID: \"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5\") " pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.377763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99b9g\" (UniqueName: \"kubernetes.io/projected/f5e3da3f-c7fe-4735-a284-f35a50c46d2b-kube-api-access-99b9g\") pod \"barbican-keystone-listener-86b58c4bfd-xlh2d\" (UID: \"f5e3da3f-c7fe-4735-a284-f35a50c46d2b\") " pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.443821 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.448115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.449102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.450529 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.450600 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2cg\" (UniqueName: \"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.450642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.450760 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.452199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.453351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.467706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.469893 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.472095 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.472815 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-696f8c69dc-tcwp6"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.474005 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.491202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2cg\" (UniqueName: \"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg\") pod \"dnsmasq-dns-6bb684768f-nbtzd\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.535336 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"]
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.538100 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.552510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgxw\" (UniqueName: \"kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.552642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.552715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.552793 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.552880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.653905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.654219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpgxw\" (UniqueName: \"kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.654268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.654289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.654325 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.654324 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.669390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.684999 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.685209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.688642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgxw\" (UniqueName: \"kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw\") pod \"barbican-api-6c779fd6b6-249z2\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:48 crc kubenswrapper[4760]: I0123 18:21:48.965521 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c779fd6b6-249z2"
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.142668 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86b58c4bfd-xlh2d"]
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.158124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-696f8c69dc-tcwp6"]
Jan 23 18:21:49 crc kubenswrapper[4760]: W0123 18:21:49.169399 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8cc68fb_ebfd_4d1c_80a3_7e5826d7f2f5.slice/crio-6cfe9dcff7ea8457915cf14ed831304dd13dff2bdc409b240c163e507fa0c9c8 WatchSource:0}: Error finding container 6cfe9dcff7ea8457915cf14ed831304dd13dff2bdc409b240c163e507fa0c9c8: Status 404 returned error can't find the container with id 6cfe9dcff7ea8457915cf14ed831304dd13dff2bdc409b240c163e507fa0c9c8
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.262041 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w8mlk"
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.320703 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"]
Jan 23 18:21:49 crc kubenswrapper[4760]: W0123 18:21:49.324474 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb897b7c_9920_417c_a4d7_9090bc6ab1bc.slice/crio-b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63 WatchSource:0}: Error finding container b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63: Status 404 returned error can't find the container with id b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.373772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnn7t\" (UniqueName: \"kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.373982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.374133 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.374233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.374429 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.374558 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle\") pod \"de1f4885-e30d-4dd2-a80c-8960404fc972\" (UID: \"de1f4885-e30d-4dd2-a80c-8960404fc972\") "
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.374638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.375329 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de1f4885-e30d-4dd2-a80c-8960404fc972-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.379073 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts" (OuterVolumeSpecName: "scripts") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.379826 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.380869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t" (OuterVolumeSpecName: "kube-api-access-fnn7t") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "kube-api-access-fnn7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.429283 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.430713 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data" (OuterVolumeSpecName: "config-data") pod "de1f4885-e30d-4dd2-a80c-8960404fc972" (UID: "de1f4885-e30d-4dd2-a80c-8960404fc972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.477033 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.477102 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.477119 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.477134 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f4885-e30d-4dd2-a80c-8960404fc972-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.477147 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnn7t\" (UniqueName: \"kubernetes.io/projected/de1f4885-e30d-4dd2-a80c-8960404fc972-kube-api-access-fnn7t\") on node \"crc\" DevicePath \"\""
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.544849 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"]
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.792874 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-696f8c69dc-tcwp6" event={"ID":"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5","Type":"ContainerStarted","Data":"6cfe9dcff7ea8457915cf14ed831304dd13dff2bdc409b240c163e507fa0c9c8"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.818835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerStarted","Data":"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.818882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerStarted","Data":"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.821855 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerStarted","Data":"81409b94ce797909a15fdf8bf318d17ea854452fca65af6ba64333ccd2d4e289"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.821898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerStarted","Data":"d185ebc185331be0e38dc1a53774d2756bbfb32880ccacc51b394e73bde2a865"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.824103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d" event={"ID":"f5e3da3f-c7fe-4735-a284-f35a50c46d2b","Type":"ContainerStarted","Data":"fe3c6af956d003ee364f492363089275fee39a5bcd8370c14f9844ef0a1db3ec"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.828716 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc" containerID="0bd69b59fe92fcc7bb62bd5fb2b1c5718d7612c5076080fba03681022d63fcd5" exitCode=0
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.828777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" event={"ID":"fb897b7c-9920-417c-a4d7-9090bc6ab1bc","Type":"ContainerDied","Data":"0bd69b59fe92fcc7bb62bd5fb2b1c5718d7612c5076080fba03681022d63fcd5"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.828837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" event={"ID":"fb897b7c-9920-417c-a4d7-9090bc6ab1bc","Type":"ContainerStarted","Data":"b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.837362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8mlk" event={"ID":"de1f4885-e30d-4dd2-a80c-8960404fc972","Type":"ContainerDied","Data":"dd758ac5e64f0f3a0ae4598adda05a098519a2c8e16c5219d4dd93170041495b"}
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.837657 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd758ac5e64f0f3a0ae4598adda05a098519a2c8e16c5219d4dd93170041495b"
Jan 23 18:21:49 crc kubenswrapper[4760]: I0123 18:21:49.837948 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w8mlk"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.066353 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:21:50 crc kubenswrapper[4760]: E0123 18:21:50.067193 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" containerName="cinder-db-sync"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.067214 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" containerName="cinder-db-sync"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.069749 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" containerName="cinder-db-sync"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.083114 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.090570 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.091198 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.091548 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x5b9j"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.092518 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.097343 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"]
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.141363 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.179724 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"]
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.181136 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206178 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206334 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206400 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4f8\" (UniqueName: \"kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.206504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.215888 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"]
Jan 23 18:21:50 crc kubenswrapper[4760]: E0123 18:21:50.295393 4760 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Jan 23 18:21:50 crc kubenswrapper[4760]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/fb897b7c-9920-417c-a4d7-9090bc6ab1bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 23 18:21:50 crc kubenswrapper[4760]: > podSandboxID="b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63"
Jan 23 18:21:50 crc kubenswrapper[4760]: E0123 18:21:50.295643 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 23 18:21:50 crc kubenswrapper[4760]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66bh5ddhdch665h558hcfh59chd6h579h5f4h75h67h5bh556hbh66fh574h5d8hf5hcch64ch649h5fdh86h85h5f8h699h64bh5c9h54ch68bh5c5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tk2cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6bb684768f-nbtzd_openstack(fb897b7c-9920-417c-a4d7-9090bc6ab1bc): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/fb897b7c-9920-417c-a4d7-9090bc6ab1bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 23 18:21:50 crc kubenswrapper[4760]: > logger="UnhandledError"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.295886 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.299155 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: E0123 18:21:50.300310 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/fb897b7c-9920-417c-a4d7-9090bc6ab1bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" podUID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.312597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313706 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313746 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313813 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtgl\" (UniqueName: \"kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4f8\" (UniqueName: \"kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.313981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.314003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.314089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.324018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.324540 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.327939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.331130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.339669 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.343121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4f8\" (UniqueName: \"kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8\") pod \"cinder-scheduler-0\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtgl\" (UniqueName: \"kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415298 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mbc\" (UniqueName: \"kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415433 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415455 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415480 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.415582 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"
Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.416764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.417570 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.417876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.418309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.443581 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtgl\" (UniqueName: \"kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl\") pod \"dnsmasq-dns-6d97fcdd8f-pd7mq\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.462894 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520081 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mbc\" (UniqueName: \"kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520303 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.520449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.521832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.525392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.532189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.532991 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.534880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.536597 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.552727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mbc\" (UniqueName: \"kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc\") pod \"cinder-api-0\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.662152 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.905668 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerStarted","Data":"547b06644752fbe2e2f150c484d7541b2d6d7f2360d2346b6a20b0a2cbd86b5a"} Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.905973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.906285 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:21:50 crc kubenswrapper[4760]: I0123 18:21:50.948258 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c779fd6b6-249z2" podStartSLOduration=2.948242184 podStartE2EDuration="2.948242184s" podCreationTimestamp="2026-01-23 18:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:50.947665078 +0000 UTC m=+1253.950123011" watchObservedRunningTime="2026-01-23 18:21:50.948242184 +0000 UTC m=+1253.950700117" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.163645 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"] Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.359029 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.474422 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.655652 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.748195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb\") pod \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.748371 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2cg\" (UniqueName: \"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg\") pod \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.748425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc\") pod \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.748526 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config\") pod \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.748593 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb\") pod \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\" (UID: \"fb897b7c-9920-417c-a4d7-9090bc6ab1bc\") " Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.755882 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg" (OuterVolumeSpecName: "kube-api-access-tk2cg") pod "fb897b7c-9920-417c-a4d7-9090bc6ab1bc" (UID: "fb897b7c-9920-417c-a4d7-9090bc6ab1bc"). InnerVolumeSpecName "kube-api-access-tk2cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.792237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config" (OuterVolumeSpecName: "config") pod "fb897b7c-9920-417c-a4d7-9090bc6ab1bc" (UID: "fb897b7c-9920-417c-a4d7-9090bc6ab1bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.800351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb897b7c-9920-417c-a4d7-9090bc6ab1bc" (UID: "fb897b7c-9920-417c-a4d7-9090bc6ab1bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.803761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb897b7c-9920-417c-a4d7-9090bc6ab1bc" (UID: "fb897b7c-9920-417c-a4d7-9090bc6ab1bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.828131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb897b7c-9920-417c-a4d7-9090bc6ab1bc" (UID: "fb897b7c-9920-417c-a4d7-9090bc6ab1bc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.851677 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2cg\" (UniqueName: \"kubernetes.io/projected/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-kube-api-access-tk2cg\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.851742 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.851756 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.851772 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.851783 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb897b7c-9920-417c-a4d7-9090bc6ab1bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.915043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerStarted","Data":"9ccd07837116faf05c9882550ce87e29bdc70a86cbd4ee6d600535ffb05e30a5"} Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.916269 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerStarted","Data":"914588b45a88b316d90befc91560bd1a1a3ddb79041d745ff2faaede87b452f2"} Jan 23 18:21:51 
crc kubenswrapper[4760]: I0123 18:21:51.925417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerStarted","Data":"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743"} Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.925592 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.928892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" event={"ID":"7c36e36b-cebd-42fa-83ed-3f5ac012865f","Type":"ContainerStarted","Data":"2970c94fb4ddf7106eac5d76c47a2c2dca0a252e0948c2987fe60d98375ff92f"} Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.930515 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.930482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-nbtzd" event={"ID":"fb897b7c-9920-417c-a4d7-9090bc6ab1bc","Type":"ContainerDied","Data":"b5dde3fea4a974d2ce433b4d94a0d1d296df6c3a83280ce82402272dc0a97f63"} Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.930671 4760 scope.go:117] "RemoveContainer" containerID="0bd69b59fe92fcc7bb62bd5fb2b1c5718d7612c5076080fba03681022d63fcd5" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.948685 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.522474531 podStartE2EDuration="6.94863641s" podCreationTimestamp="2026-01-23 18:21:45 +0000 UTC" firstStartedPulling="2026-01-23 18:21:46.617745504 +0000 UTC m=+1249.620203437" lastFinishedPulling="2026-01-23 18:21:51.043907383 +0000 UTC m=+1254.046365316" observedRunningTime="2026-01-23 18:21:51.941723625 +0000 UTC m=+1254.944181558" 
watchObservedRunningTime="2026-01-23 18:21:51.94863641 +0000 UTC m=+1254.951094343" Jan 23 18:21:51 crc kubenswrapper[4760]: I0123 18:21:51.993123 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"] Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.002530 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-nbtzd"] Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.944333 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d" event={"ID":"f5e3da3f-c7fe-4735-a284-f35a50c46d2b","Type":"ContainerStarted","Data":"a4456ece0500041c6cd497283649e8708d4c95bb52670ca229a4686e87dffbb5"} Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.947058 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerID="ddaa1b2a77f1f408c8c76365ed1853b75ae0add8cc62f9ee1a4b1d75568fb9dc" exitCode=0 Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.949768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" event={"ID":"7c36e36b-cebd-42fa-83ed-3f5ac012865f","Type":"ContainerDied","Data":"ddaa1b2a77f1f408c8c76365ed1853b75ae0add8cc62f9ee1a4b1d75568fb9dc"} Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.957760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-696f8c69dc-tcwp6" event={"ID":"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5","Type":"ContainerStarted","Data":"29ef81a21deb04d920bda6eeb829f32259319766df929aeb8a684c2b020eec0d"} Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 18:21:52.957798 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-696f8c69dc-tcwp6" event={"ID":"e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5","Type":"ContainerStarted","Data":"320dcae6e8a2d9ab545056cdf2a3c5ef7cf489afed9b00f2b666fc4d3789492d"} Jan 23 18:21:52 crc kubenswrapper[4760]: I0123 
18:21:52.990949 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-696f8c69dc-tcwp6" podStartSLOduration=1.8289420189999999 podStartE2EDuration="4.990928143s" podCreationTimestamp="2026-01-23 18:21:48 +0000 UTC" firstStartedPulling="2026-01-23 18:21:49.173102303 +0000 UTC m=+1252.175560236" lastFinishedPulling="2026-01-23 18:21:52.335088427 +0000 UTC m=+1255.337546360" observedRunningTime="2026-01-23 18:21:52.987686027 +0000 UTC m=+1255.990143960" watchObservedRunningTime="2026-01-23 18:21:52.990928143 +0000 UTC m=+1255.993386086" Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.620894 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc" path="/var/lib/kubelet/pods/fb897b7c-9920-417c-a4d7-9090bc6ab1bc/volumes" Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.677442 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.967575 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d" event={"ID":"f5e3da3f-c7fe-4735-a284-f35a50c46d2b","Type":"ContainerStarted","Data":"ff109a0502f7afe60f6c25da87d168bff7a7c999c3eb50f79e0f8f71d99646a3"} Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.969639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" event={"ID":"7c36e36b-cebd-42fa-83ed-3f5ac012865f","Type":"ContainerStarted","Data":"8f478a31bac11c6cb22bb00cc51de5825c9d600ab76d78645f526ce0cb72440e"} Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.969875 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.971083 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerStarted","Data":"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759"} Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.972901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerStarted","Data":"d048d655f0082c7e5bec8e1bb0f2239b7014f363e9bfdc1d271ebe661d93dc78"} Jan 23 18:21:53 crc kubenswrapper[4760]: I0123 18:21:53.987111 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86b58c4bfd-xlh2d" podStartSLOduration=2.856484327 podStartE2EDuration="5.987089976s" podCreationTimestamp="2026-01-23 18:21:48 +0000 UTC" firstStartedPulling="2026-01-23 18:21:49.190333942 +0000 UTC m=+1252.192791865" lastFinishedPulling="2026-01-23 18:21:52.320939581 +0000 UTC m=+1255.323397514" observedRunningTime="2026-01-23 18:21:53.985678729 +0000 UTC m=+1256.988136662" watchObservedRunningTime="2026-01-23 18:21:53.987089976 +0000 UTC m=+1256.989547909" Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.021824 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" podStartSLOduration=4.021805951 podStartE2EDuration="4.021805951s" podCreationTimestamp="2026-01-23 18:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:54.013944332 +0000 UTC m=+1257.016402275" watchObservedRunningTime="2026-01-23 18:21:54.021805951 +0000 UTC m=+1257.024263884" Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.986976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerStarted","Data":"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e"} Jan 23 18:21:54 crc 
kubenswrapper[4760]: I0123 18:21:54.987089 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.987101 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api-log" containerID="cri-o://4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" gracePeriod=30 Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.987108 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api" containerID="cri-o://ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" gracePeriod=30 Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.995311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerStarted","Data":"15b45f4bf0c16f7e96c784adb5fcfd6a61da3c6e667f15ed0ec7e20d287c5a25"} Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.997866 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-664c7d54bb-bxtwt"] Jan 23 18:21:54 crc kubenswrapper[4760]: E0123 18:21:54.998594 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc" containerName="init" Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.998619 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc" containerName="init" Jan 23 18:21:54 crc kubenswrapper[4760]: I0123 18:21:54.998905 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb897b7c-9920-417c-a4d7-9090bc6ab1bc" containerName="init" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.026672 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.032277 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.033152 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.064550 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-664c7d54bb-bxtwt"] Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.081271 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.081248581 podStartE2EDuration="5.081248581s" podCreationTimestamp="2026-01-23 18:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:55.028917556 +0000 UTC m=+1258.031375489" watchObservedRunningTime="2026-01-23 18:21:55.081248581 +0000 UTC m=+1258.083706514" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.127394 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.18693758 podStartE2EDuration="5.127368529s" podCreationTimestamp="2026-01-23 18:21:50 +0000 UTC" firstStartedPulling="2026-01-23 18:21:51.576953876 +0000 UTC m=+1254.579411809" lastFinishedPulling="2026-01-23 18:21:52.517384825 +0000 UTC m=+1255.519842758" observedRunningTime="2026-01-23 18:21:55.079002301 +0000 UTC m=+1258.081460234" watchObservedRunningTime="2026-01-23 18:21:55.127368529 +0000 UTC m=+1258.129826482" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-public-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-combined-ca-bundle\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-logs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-internal-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142330 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt8r\" (UniqueName: \"kubernetes.io/projected/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-kube-api-access-sdt8r\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142380 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data-custom\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.142554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: E0123 18:21:55.171035 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f4d3219_73ff_4b38_8d0c_00f17dfe41ea.slice/crio-4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data-custom\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244395 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-public-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244441 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-combined-ca-bundle\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-logs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.244483 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-internal-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.245276 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-logs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.245965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sdt8r\" (UniqueName: \"kubernetes.io/projected/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-kube-api-access-sdt8r\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.253294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-combined-ca-bundle\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.254285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-public-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.256095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.257776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-config-data-custom\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.261663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-internal-tls-certs\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.265625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdt8r\" (UniqueName: \"kubernetes.io/projected/78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986-kube-api-access-sdt8r\") pod \"barbican-api-664c7d54bb-bxtwt\" (UID: \"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986\") " pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.392946 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.464658 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.931450 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-664c7d54bb-bxtwt"] Jan 23 18:21:55 crc kubenswrapper[4760]: I0123 18:21:55.947427 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012375 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerID="ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" exitCode=0 Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012447 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerID="4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" exitCode=143 Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerDied","Data":"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e"} Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012520 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerDied","Data":"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759"} Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea","Type":"ContainerDied","Data":"9ccd07837116faf05c9882550ce87e29bdc70a86cbd4ee6d600535ffb05e30a5"} Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012544 4760 scope.go:117] "RemoveContainer" containerID="ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.012692 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.015388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-664c7d54bb-bxtwt" event={"ID":"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986","Type":"ContainerStarted","Data":"3c7846e887706a056ea3f2128f8d34bc7c491de2ef590d8e6ac5c0799830b85b"} Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.045921 4760 scope.go:117] "RemoveContainer" containerID="4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061859 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mbc\" (UniqueName: \"kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.061960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id\") pod \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\" (UID: \"0f4d3219-73ff-4b38-8d0c-00f17dfe41ea\") " Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.062282 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.063957 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs" (OuterVolumeSpecName: "logs") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.069742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc" (OuterVolumeSpecName: "kube-api-access-q7mbc") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "kube-api-access-q7mbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.071327 4760 scope.go:117] "RemoveContainer" containerID="ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" Jan 23 18:21:56 crc kubenswrapper[4760]: E0123 18:21:56.072229 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e\": container with ID starting with ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e not found: ID does not exist" containerID="ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.072280 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e"} err="failed to get container status \"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e\": rpc error: code = NotFound desc = could not find container \"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e\": container with ID starting with ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e not found: ID does not exist" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.072321 4760 scope.go:117] "RemoveContainer" containerID="4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" Jan 23 18:21:56 crc kubenswrapper[4760]: E0123 18:21:56.073032 
4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759\": container with ID starting with 4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759 not found: ID does not exist" containerID="4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.073061 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759"} err="failed to get container status \"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759\": rpc error: code = NotFound desc = could not find container \"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759\": container with ID starting with 4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759 not found: ID does not exist" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.073078 4760 scope.go:117] "RemoveContainer" containerID="ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.073314 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e"} err="failed to get container status \"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e\": rpc error: code = NotFound desc = could not find container \"ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e\": container with ID starting with ec161df26e45edec4e1db77ede6488d8dc2393e47a70967ea29ae4253d89d55e not found: ID does not exist" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.073336 4760 scope.go:117] "RemoveContainer" containerID="4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 
18:21:56.074859 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759"} err="failed to get container status \"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759\": rpc error: code = NotFound desc = could not find container \"4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759\": container with ID starting with 4ded93eaf15db89da4b2639e591b3976a2d8ed486fe7963425aba6873337b759 not found: ID does not exist" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.076141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.076931 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts" (OuterVolumeSpecName: "scripts") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.103289 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.123812 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data" (OuterVolumeSpecName: "config-data") pod "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" (UID: "0f4d3219-73ff-4b38-8d0c-00f17dfe41ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163339 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163670 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163682 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163693 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163702 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7mbc\" (UniqueName: \"kubernetes.io/projected/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-kube-api-access-q7mbc\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163713 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.163723 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.364895 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.372602 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.402431 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:56 crc kubenswrapper[4760]: E0123 18:21:56.403571 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api-log" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.403597 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api-log" Jan 23 18:21:56 crc kubenswrapper[4760]: E0123 18:21:56.403625 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.403635 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.403851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.403883 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" containerName="cinder-api-log" Jan 23 
18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.405587 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.409524 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.409703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.411573 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.418611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.476502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.476780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data-custom\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.476903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 
18:21:56.476969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f611f27e-a46b-40f8-ad28-a32d1dfa1149-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.477045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.477261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-scripts\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.477329 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.477433 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pc6\" (UniqueName: \"kubernetes.io/projected/f611f27e-a46b-40f8-ad28-a32d1dfa1149-kube-api-access-t8pc6\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.477530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f611f27e-a46b-40f8-ad28-a32d1dfa1149-logs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f611f27e-a46b-40f8-ad28-a32d1dfa1149-logs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578679 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data-custom\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578716 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f611f27e-a46b-40f8-ad28-a32d1dfa1149-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc 
kubenswrapper[4760]: I0123 18:21:56.578744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-scripts\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.578810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f611f27e-a46b-40f8-ad28-a32d1dfa1149-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.579374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.579424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pc6\" (UniqueName: \"kubernetes.io/projected/f611f27e-a46b-40f8-ad28-a32d1dfa1149-kube-api-access-t8pc6\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.579490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f611f27e-a46b-40f8-ad28-a32d1dfa1149-logs\") pod \"cinder-api-0\" 
(UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.583034 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-scripts\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.583293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.584003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.584231 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.584279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.585040 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f611f27e-a46b-40f8-ad28-a32d1dfa1149-config-data-custom\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.601490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8pc6\" (UniqueName: \"kubernetes.io/projected/f611f27e-a46b-40f8-ad28-a32d1dfa1149-kube-api-access-t8pc6\") pod \"cinder-api-0\" (UID: \"f611f27e-a46b-40f8-ad28-a32d1dfa1149\") " pod="openstack/cinder-api-0" Jan 23 18:21:56 crc kubenswrapper[4760]: I0123 18:21:56.744447 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.050088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-664c7d54bb-bxtwt" event={"ID":"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986","Type":"ContainerStarted","Data":"78da790b147b4aa5ff3449f9ac96391ee662be2c21d7787d9fb38c39ab9b5d95"} Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.050514 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.050532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-664c7d54bb-bxtwt" event={"ID":"78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986","Type":"ContainerStarted","Data":"e05b07523daf4f473213703729ed123c37d6ab65d2b9fa88d8d700f2e029dec6"} Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.050545 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.084520 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-664c7d54bb-bxtwt" podStartSLOduration=3.084492589 podStartE2EDuration="3.084492589s" podCreationTimestamp="2026-01-23 18:21:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:57.078391747 +0000 UTC m=+1260.080849690" watchObservedRunningTime="2026-01-23 18:21:57.084492589 +0000 UTC m=+1260.086950562" Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.202592 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:21:57 crc kubenswrapper[4760]: I0123 18:21:57.626290 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4d3219-73ff-4b38-8d0c-00f17dfe41ea" path="/var/lib/kubelet/pods/0f4d3219-73ff-4b38-8d0c-00f17dfe41ea/volumes" Jan 23 18:21:58 crc kubenswrapper[4760]: I0123 18:21:58.061451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f611f27e-a46b-40f8-ad28-a32d1dfa1149","Type":"ContainerStarted","Data":"f3d6c4bffdf9fa35695192d2c9a6b3ff59421c2556864990c845ae46967b073d"} Jan 23 18:21:58 crc kubenswrapper[4760]: I0123 18:21:58.062059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f611f27e-a46b-40f8-ad28-a32d1dfa1149","Type":"ContainerStarted","Data":"578d9431161e7f6016e67d0c940734020c7fb84eed4aec811dadbbb66a3a6ec0"} Jan 23 18:21:59 crc kubenswrapper[4760]: I0123 18:21:59.073558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f611f27e-a46b-40f8-ad28-a32d1dfa1149","Type":"ContainerStarted","Data":"819aa687ae118c3ddc725e5e2f749865e9040fd128a2dadbb7119eecf6c8f10f"} Jan 23 18:21:59 crc kubenswrapper[4760]: I0123 18:21:59.074436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 18:21:59 crc kubenswrapper[4760]: I0123 18:21:59.099687 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.099668705 podStartE2EDuration="3.099668705s" podCreationTimestamp="2026-01-23 
18:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:59.093551662 +0000 UTC m=+1262.096009605" watchObservedRunningTime="2026-01-23 18:21:59.099668705 +0000 UTC m=+1262.102126638" Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.500258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.524024 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.535641 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.634427 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.634672 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="dnsmasq-dns" containerID="cri-o://3dd42838a2862f6aa54c34e3eea44369491ea55b7a277ff28bc0ad57f3f1421e" gracePeriod=10 Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.780546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 18:22:00 crc kubenswrapper[4760]: I0123 18:22:00.866055 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.105911 4760 generic.go:334] "Generic (PLEG): container finished" podID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerID="3dd42838a2862f6aa54c34e3eea44369491ea55b7a277ff28bc0ad57f3f1421e" exitCode=0 Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 
18:22:01.106113 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="cinder-scheduler" containerID="cri-o://d048d655f0082c7e5bec8e1bb0f2239b7014f363e9bfdc1d271ebe661d93dc78" gracePeriod=30 Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.106586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" event={"ID":"d932e4f8-84d4-45d4-bd22-24f2210215e3","Type":"ContainerDied","Data":"3dd42838a2862f6aa54c34e3eea44369491ea55b7a277ff28bc0ad57f3f1421e"} Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.106697 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="probe" containerID="cri-o://15b45f4bf0c16f7e96c784adb5fcfd6a61da3c6e667f15ed0ec7e20d287c5a25" gracePeriod=30 Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.262731 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.268459 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.396157 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config\") pod \"d932e4f8-84d4-45d4-bd22-24f2210215e3\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.396501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc\") pod \"d932e4f8-84d4-45d4-bd22-24f2210215e3\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.396719 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb\") pod \"d932e4f8-84d4-45d4-bd22-24f2210215e3\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.396866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb\") pod \"d932e4f8-84d4-45d4-bd22-24f2210215e3\" (UID: \"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.396969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s292\" (UniqueName: \"kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292\") pod \"d932e4f8-84d4-45d4-bd22-24f2210215e3\" (UID: 
\"d932e4f8-84d4-45d4-bd22-24f2210215e3\") " Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.406384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292" (OuterVolumeSpecName: "kube-api-access-2s292") pod "d932e4f8-84d4-45d4-bd22-24f2210215e3" (UID: "d932e4f8-84d4-45d4-bd22-24f2210215e3"). InnerVolumeSpecName "kube-api-access-2s292". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.473163 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d932e4f8-84d4-45d4-bd22-24f2210215e3" (UID: "d932e4f8-84d4-45d4-bd22-24f2210215e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.496137 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66645c546d-bcr2r" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.499634 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.499825 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s292\" (UniqueName: \"kubernetes.io/projected/d932e4f8-84d4-45d4-bd22-24f2210215e3-kube-api-access-2s292\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.501932 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d932e4f8-84d4-45d4-bd22-24f2210215e3" (UID: 
"d932e4f8-84d4-45d4-bd22-24f2210215e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.509575 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config" (OuterVolumeSpecName: "config") pod "d932e4f8-84d4-45d4-bd22-24f2210215e3" (UID: "d932e4f8-84d4-45d4-bd22-24f2210215e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.548880 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d932e4f8-84d4-45d4-bd22-24f2210215e3" (UID: "d932e4f8-84d4-45d4-bd22-24f2210215e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.602041 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.602088 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:01 crc kubenswrapper[4760]: I0123 18:22:01.602097 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d932e4f8-84d4-45d4-bd22-24f2210215e3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.115157 4760 generic.go:334] "Generic (PLEG): container finished" podID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerID="15b45f4bf0c16f7e96c784adb5fcfd6a61da3c6e667f15ed0ec7e20d287c5a25" 
exitCode=0 Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.115217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerDied","Data":"15b45f4bf0c16f7e96c784adb5fcfd6a61da3c6e667f15ed0ec7e20d287c5a25"} Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.117182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" event={"ID":"d932e4f8-84d4-45d4-bd22-24f2210215e3","Type":"ContainerDied","Data":"94a4c8a80bbc67843a17d0d9709ee6594f0ca9d47d76a36086677a685906a5ec"} Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.117222 4760 scope.go:117] "RemoveContainer" containerID="3dd42838a2862f6aa54c34e3eea44369491ea55b7a277ff28bc0ad57f3f1421e" Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.117237 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-kvmhk" Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.148860 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.155396 4760 scope.go:117] "RemoveContainer" containerID="c286792d35f039d36a1713772d812c3b4a0504fbc06604a7b2009ec9155462de" Jan 23 18:22:02 crc kubenswrapper[4760]: I0123 18:22:02.158127 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-kvmhk"] Jan 23 18:22:03 crc kubenswrapper[4760]: I0123 18:22:03.625348 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" path="/var/lib/kubelet/pods/d932e4f8-84d4-45d4-bd22-24f2210215e3/volumes" Jan 23 18:22:04 crc kubenswrapper[4760]: I0123 18:22:04.328981 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:22:04 crc kubenswrapper[4760]: I0123 18:22:04.848828 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74944c68c4-mnfbr" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.152977 4760 generic.go:334] "Generic (PLEG): container finished" podID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerID="d048d655f0082c7e5bec8e1bb0f2239b7014f363e9bfdc1d271ebe661d93dc78" exitCode=0 Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.153026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerDied","Data":"d048d655f0082c7e5bec8e1bb0f2239b7014f363e9bfdc1d271ebe661d93dc78"} Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.492781 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 18:22:05 crc kubenswrapper[4760]: E0123 18:22:05.493284 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="dnsmasq-dns" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.493301 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="dnsmasq-dns" Jan 23 18:22:05 crc kubenswrapper[4760]: E0123 18:22:05.493327 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="init" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.493335 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="init" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.493547 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d932e4f8-84d4-45d4-bd22-24f2210215e3" containerName="dnsmasq-dns" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.494349 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.496741 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.497265 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-sf5c4" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.498354 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.518390 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.546466 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679570 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc 
kubenswrapper[4760]: I0123 18:22:05.679685 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679852 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679898 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4f8\" (UniqueName: \"kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8\") pod \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\" (UID: \"5e121355-eb81-4ec0-8ed2-86f5a33bb400\") " Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.679947 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.680600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.680640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzz6f\" (UniqueName: \"kubernetes.io/projected/b41e1f55-3448-4112-8aca-c5c2d6018310-kube-api-access-mzz6f\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.680680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.680739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.680965 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e121355-eb81-4ec0-8ed2-86f5a33bb400-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.686554 4760 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.692746 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts" (OuterVolumeSpecName: "scripts") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.697933 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8" (OuterVolumeSpecName: "kube-api-access-7w4f8") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "kube-api-access-7w4f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.740819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.782906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.782954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzz6f\" (UniqueName: \"kubernetes.io/projected/b41e1f55-3448-4112-8aca-c5c2d6018310-kube-api-access-mzz6f\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.782999 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.783058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.783209 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4f8\" (UniqueName: \"kubernetes.io/projected/5e121355-eb81-4ec0-8ed2-86f5a33bb400-kube-api-access-7w4f8\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.783233 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.783246 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.783259 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.784056 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.796193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.796188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41e1f55-3448-4112-8aca-c5c2d6018310-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.799223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzz6f\" (UniqueName: 
\"kubernetes.io/projected/b41e1f55-3448-4112-8aca-c5c2d6018310-kube-api-access-mzz6f\") pod \"openstackclient\" (UID: \"b41e1f55-3448-4112-8aca-c5c2d6018310\") " pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.810041 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data" (OuterVolumeSpecName: "config-data") pod "5e121355-eb81-4ec0-8ed2-86f5a33bb400" (UID: "5e121355-eb81-4ec0-8ed2-86f5a33bb400"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.876892 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 18:22:05 crc kubenswrapper[4760]: I0123 18:22:05.885814 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e121355-eb81-4ec0-8ed2-86f5a33bb400-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.180679 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e121355-eb81-4ec0-8ed2-86f5a33bb400","Type":"ContainerDied","Data":"914588b45a88b316d90befc91560bd1a1a3ddb79041d745ff2faaede87b452f2"} Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.181159 4760 scope.go:117] "RemoveContainer" containerID="15b45f4bf0c16f7e96c784adb5fcfd6a61da3c6e667f15ed0ec7e20d287c5a25" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.181017 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.208840 4760 scope.go:117] "RemoveContainer" containerID="d048d655f0082c7e5bec8e1bb0f2239b7014f363e9bfdc1d271ebe661d93dc78" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.266458 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.280478 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.308041 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:06 crc kubenswrapper[4760]: E0123 18:22:06.308445 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="cinder-scheduler" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.308462 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="cinder-scheduler" Jan 23 18:22:06 crc kubenswrapper[4760]: E0123 18:22:06.308487 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="probe" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.308493 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="probe" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.308687 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="probe" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.308701 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" containerName="cinder-scheduler" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.309540 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.311973 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.336875 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3b1a53e-ed5e-43c1-aa57-e0e829359103-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398626 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrzv\" (UniqueName: \"kubernetes.io/projected/b3b1a53e-ed5e-43c1-aa57-e0e829359103-kube-api-access-tcrzv\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.398658 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499722 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3b1a53e-ed5e-43c1-aa57-e0e829359103-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrzv\" (UniqueName: 
\"kubernetes.io/projected/b3b1a53e-ed5e-43c1-aa57-e0e829359103-kube-api-access-tcrzv\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.499924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.500363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3b1a53e-ed5e-43c1-aa57-e0e829359103-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.506895 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.507853 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " 
pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.508872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.513281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b1a53e-ed5e-43c1-aa57-e0e829359103-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.528865 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrzv\" (UniqueName: \"kubernetes.io/projected/b3b1a53e-ed5e-43c1-aa57-e0e829359103-kube-api-access-tcrzv\") pod \"cinder-scheduler-0\" (UID: \"b3b1a53e-ed5e-43c1-aa57-e0e829359103\") " pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.565728 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.648158 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.806840 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-585856f577-q8bpp" Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.887352 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.887586 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68d698bbd-sfnql" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-api" containerID="cri-o://2c8c7eef1616f43f0f97bf1f7c2a50971cc464c1d8686eb0dbe7503df29aaee3" gracePeriod=30 Jan 23 18:22:06 crc kubenswrapper[4760]: I0123 18:22:06.887709 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68d698bbd-sfnql" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-httpd" containerID="cri-o://296eb75f6869d405f821a570e52f26fa90f16b0254871df72af68b267051027b" gracePeriod=30 Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.187186 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.195603 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b41e1f55-3448-4112-8aca-c5c2d6018310","Type":"ContainerStarted","Data":"3b3df81e03b673ad172d54b026feae0ad187f4d34ccead433ff5b45b45cb3df5"} Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.198192 4760 generic.go:334] "Generic (PLEG): container finished" podID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerID="296eb75f6869d405f821a570e52f26fa90f16b0254871df72af68b267051027b" exitCode=0 Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.198219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" 
event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerDied","Data":"296eb75f6869d405f821a570e52f26fa90f16b0254871df72af68b267051027b"} Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.352774 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.520538 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-664c7d54bb-bxtwt" Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.585613 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"] Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.585964 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c779fd6b6-249z2" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api-log" containerID="cri-o://81409b94ce797909a15fdf8bf318d17ea854452fca65af6ba64333ccd2d4e289" gracePeriod=30 Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.586345 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c779fd6b6-249z2" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api" containerID="cri-o://547b06644752fbe2e2f150c484d7541b2d6d7f2360d2346b6a20b0a2cbd86b5a" gracePeriod=30 Jan 23 18:22:07 crc kubenswrapper[4760]: I0123 18:22:07.614748 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e121355-eb81-4ec0-8ed2-86f5a33bb400" path="/var/lib/kubelet/pods/5e121355-eb81-4ec0-8ed2-86f5a33bb400/volumes" Jan 23 18:22:08 crc kubenswrapper[4760]: I0123 18:22:08.235827 4760 generic.go:334] "Generic (PLEG): container finished" podID="05cc414f-5108-4436-9a14-354c8575b38e" containerID="81409b94ce797909a15fdf8bf318d17ea854452fca65af6ba64333ccd2d4e289" exitCode=143 Jan 23 18:22:08 crc kubenswrapper[4760]: I0123 18:22:08.236164 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerDied","Data":"81409b94ce797909a15fdf8bf318d17ea854452fca65af6ba64333ccd2d4e289"} Jan 23 18:22:08 crc kubenswrapper[4760]: I0123 18:22:08.255801 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3b1a53e-ed5e-43c1-aa57-e0e829359103","Type":"ContainerStarted","Data":"c6b9fd218be9dc871cd47cb95492eff511867693bb3bde02c2c195fc97b24d7e"} Jan 23 18:22:08 crc kubenswrapper[4760]: I0123 18:22:08.255872 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3b1a53e-ed5e-43c1-aa57-e0e829359103","Type":"ContainerStarted","Data":"fd79c3e2a8e44660ca9806207c38acd8975f0573dc4ae992cf5bb5f5f77d6c88"} Jan 23 18:22:09 crc kubenswrapper[4760]: I0123 18:22:09.268787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3b1a53e-ed5e-43c1-aa57-e0e829359103","Type":"ContainerStarted","Data":"2f1063f09b56c4e3039d8be53f4efb50dcbb13ef29452a8121847d71178bc48f"} Jan 23 18:22:09 crc kubenswrapper[4760]: I0123 18:22:09.336454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.336433723 podStartE2EDuration="3.336433723s" podCreationTimestamp="2026-01-23 18:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:22:09.33560623 +0000 UTC m=+1272.338064163" watchObservedRunningTime="2026-01-23 18:22:09.336433723 +0000 UTC m=+1272.338891656" Jan 23 18:22:09 crc kubenswrapper[4760]: I0123 18:22:09.647934 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.074006 4760 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-6c779fd6b6-249z2" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:44682->10.217.0.151:9311: read: connection reset by peer" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.074579 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c779fd6b6-249z2" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:44670->10.217.0.151:9311: read: connection reset by peer" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.294867 4760 generic.go:334] "Generic (PLEG): container finished" podID="05cc414f-5108-4436-9a14-354c8575b38e" containerID="547b06644752fbe2e2f150c484d7541b2d6d7f2360d2346b6a20b0a2cbd86b5a" exitCode=0 Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.294941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerDied","Data":"547b06644752fbe2e2f150c484d7541b2d6d7f2360d2346b6a20b0a2cbd86b5a"} Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.299156 4760 generic.go:334] "Generic (PLEG): container finished" podID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerID="2c8c7eef1616f43f0f97bf1f7c2a50971cc464c1d8686eb0dbe7503df29aaee3" exitCode=0 Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.299185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerDied","Data":"2c8c7eef1616f43f0f97bf1f7c2a50971cc464c1d8686eb0dbe7503df29aaee3"} Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.299207 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d698bbd-sfnql" 
event={"ID":"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a","Type":"ContainerDied","Data":"af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04"} Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.299240 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3df36d4a6e3eefabd0a7aa63e9dd837b6d249ffde3902f87ef58e13078fe04" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.319945 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.412770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs\") pod \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.498656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" (UID: "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.514698 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config\") pod \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.514811 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config\") pod \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.514945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle\") pod \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.514978 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqfmx\" (UniqueName: \"kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx\") pod \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\" (UID: \"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.515965 4760 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.517888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" (UID: "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.520818 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx" (OuterVolumeSpecName: "kube-api-access-zqfmx") pod "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" (UID: "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a"). InnerVolumeSpecName "kube-api-access-zqfmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.521340 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.564441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config" (OuterVolumeSpecName: "config") pod "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" (UID: "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.581367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" (UID: "8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617203 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data\") pod \"05cc414f-5108-4436-9a14-354c8575b38e\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs\") pod \"05cc414f-5108-4436-9a14-354c8575b38e\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpgxw\" (UniqueName: \"kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw\") pod \"05cc414f-5108-4436-9a14-354c8575b38e\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617354 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle\") pod \"05cc414f-5108-4436-9a14-354c8575b38e\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617418 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom\") pod \"05cc414f-5108-4436-9a14-354c8575b38e\" (UID: \"05cc414f-5108-4436-9a14-354c8575b38e\") " Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617852 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617874 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617884 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.617894 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqfmx\" (UniqueName: \"kubernetes.io/projected/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a-kube-api-access-zqfmx\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.619232 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs" (OuterVolumeSpecName: "logs") pod "05cc414f-5108-4436-9a14-354c8575b38e" (UID: "05cc414f-5108-4436-9a14-354c8575b38e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.620678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05cc414f-5108-4436-9a14-354c8575b38e" (UID: "05cc414f-5108-4436-9a14-354c8575b38e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.620812 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw" (OuterVolumeSpecName: "kube-api-access-xpgxw") pod "05cc414f-5108-4436-9a14-354c8575b38e" (UID: "05cc414f-5108-4436-9a14-354c8575b38e"). InnerVolumeSpecName "kube-api-access-xpgxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.650905 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.657976 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05cc414f-5108-4436-9a14-354c8575b38e" (UID: "05cc414f-5108-4436-9a14-354c8575b38e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.667563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data" (OuterVolumeSpecName: "config-data") pod "05cc414f-5108-4436-9a14-354c8575b38e" (UID: "05cc414f-5108-4436-9a14-354c8575b38e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.719502 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpgxw\" (UniqueName: \"kubernetes.io/projected/05cc414f-5108-4436-9a14-354c8575b38e-kube-api-access-xpgxw\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.719542 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.719555 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.719568 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cc414f-5108-4436-9a14-354c8575b38e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:11 crc kubenswrapper[4760]: I0123 18:22:11.719579 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05cc414f-5108-4436-9a14-354c8575b38e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.326584 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d698bbd-sfnql" Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.326624 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c779fd6b6-249z2" Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.326647 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c779fd6b6-249z2" event={"ID":"05cc414f-5108-4436-9a14-354c8575b38e","Type":"ContainerDied","Data":"d185ebc185331be0e38dc1a53774d2756bbfb32880ccacc51b394e73bde2a865"} Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.327077 4760 scope.go:117] "RemoveContainer" containerID="547b06644752fbe2e2f150c484d7541b2d6d7f2360d2346b6a20b0a2cbd86b5a" Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.368200 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.377769 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68d698bbd-sfnql"] Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.386771 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"] Jan 23 18:22:12 crc kubenswrapper[4760]: I0123 18:22:12.394179 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c779fd6b6-249z2"] Jan 23 18:22:13 crc kubenswrapper[4760]: I0123 18:22:13.613659 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cc414f-5108-4436-9a14-354c8575b38e" path="/var/lib/kubelet/pods/05cc414f-5108-4436-9a14-354c8575b38e/volumes" Jan 23 18:22:13 crc kubenswrapper[4760]: I0123 18:22:13.614666 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" path="/var/lib/kubelet/pods/8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a/volumes" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.627211 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7xpz8"] Jan 23 18:22:15 crc kubenswrapper[4760]: E0123 18:22:15.627854 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.627867 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api" Jan 23 18:22:15 crc kubenswrapper[4760]: E0123 18:22:15.627890 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api-log" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.627896 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api-log" Jan 23 18:22:15 crc kubenswrapper[4760]: E0123 18:22:15.627914 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-httpd" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.627920 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-httpd" Jan 23 18:22:15 crc kubenswrapper[4760]: E0123 18:22:15.627947 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-api" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.627952 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-api" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.628097 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-httpd" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.628110 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api-log" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.628118 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05cc414f-5108-4436-9a14-354c8575b38e" containerName="barbican-api" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.628134 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9aa0fc-ef7a-49f6-b76a-aeb5b618a16a" containerName="neutron-api" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.628664 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.660280 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7xpz8"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.745523 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xw828"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.746578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.765369 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xw828"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.785995 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.786205 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6h8\" (UniqueName: \"kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.832734 
4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c2q94"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.834058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.842844 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-90e1-account-create-update-q66bq"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.844245 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.846900 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.853675 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c2q94"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.864736 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-90e1-account-create-update-q66bq"] Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.888284 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprkg\" (UniqueName: \"kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.888396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq6h8\" (UniqueName: \"kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc 
kubenswrapper[4760]: I0123 18:22:15.888752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.888883 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.890156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.909321 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq6h8\" (UniqueName: \"kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8\") pod \"nova-api-db-create-7xpz8\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.987672 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthjj\" (UniqueName: \"kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2w6f\" (UniqueName: \"kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.990291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprkg\" (UniqueName: \"kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:15 crc kubenswrapper[4760]: I0123 18:22:15.991273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.032145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprkg\" (UniqueName: \"kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg\") pod \"nova-cell0-db-create-xw828\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.047205 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9fa7-account-create-update-2x9wm"] Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.048337 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.051295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.062830 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9fa7-account-create-update-2x9wm"] Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.063428 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.093248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jthjj\" (UniqueName: \"kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.094733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.094923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2w6f\" (UniqueName: \"kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.095087 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.096792 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.096977 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.115933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jthjj\" (UniqueName: \"kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj\") pod \"nova-api-90e1-account-create-update-q66bq\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.116622 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2w6f\" (UniqueName: \"kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f\") pod \"nova-cell1-db-create-c2q94\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.149119 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.162850 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.171263 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.196754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqwx\" (UniqueName: \"kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx\") pod \"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.196986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts\") pod \"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.242037 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-414c-account-create-update-c6x2x"] Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.243292 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.250768 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.257266 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-414c-account-create-update-c6x2x"] Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.298090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts\") pod \"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.298163 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqwx\" (UniqueName: \"kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx\") pod \"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.298775 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts\") pod \"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.343929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqwx\" (UniqueName: \"kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx\") pod 
\"nova-cell0-9fa7-account-create-update-2x9wm\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.386788 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.399491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpjrv\" (UniqueName: \"kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv\") pod \"nova-cell1-414c-account-create-update-c6x2x\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.399573 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts\") pod \"nova-cell1-414c-account-create-update-c6x2x\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.500704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts\") pod \"nova-cell1-414c-account-create-update-c6x2x\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.500850 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpjrv\" (UniqueName: \"kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv\") pod \"nova-cell1-414c-account-create-update-c6x2x\" 
(UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.501671 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts\") pod \"nova-cell1-414c-account-create-update-c6x2x\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.524110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpjrv\" (UniqueName: \"kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv\") pod \"nova-cell1-414c-account-create-update-c6x2x\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.575915 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:16 crc kubenswrapper[4760]: I0123 18:22:16.864717 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 18:22:17 crc kubenswrapper[4760]: I0123 18:22:17.768077 4760 scope.go:117] "RemoveContainer" containerID="81409b94ce797909a15fdf8bf318d17ea854452fca65af6ba64333ccd2d4e289" Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.292386 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c2q94"] Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.336776 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9fa7-account-create-update-2x9wm"] Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.405248 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" event={"ID":"d67914fd-9d8f-4d74-8bf3-51550c292f95","Type":"ContainerStarted","Data":"966d33bc39667af9a97e19e2216b1e3031d2044fd752701489644dc7dfcdd4c0"} Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.406113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c2q94" event={"ID":"24798eb5-c0de-4eec-a09e-d3bb7409e529","Type":"ContainerStarted","Data":"dd5a584fde5c08cb792b6a2d72f88da006d8455772a2b5491a300cc702d5e135"} Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.409840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b41e1f55-3448-4112-8aca-c5c2d6018310","Type":"ContainerStarted","Data":"0319b84c071b9ce1aa1fb67fff98137f6328516ec7cb2ce0d683657f8b35a73f"} Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.433081 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.121126455 podStartE2EDuration="13.433060261s" podCreationTimestamp="2026-01-23 18:22:05 
+0000 UTC" firstStartedPulling="2026-01-23 18:22:06.582402149 +0000 UTC m=+1269.584860082" lastFinishedPulling="2026-01-23 18:22:17.894335955 +0000 UTC m=+1280.896793888" observedRunningTime="2026-01-23 18:22:18.430772589 +0000 UTC m=+1281.433230512" watchObservedRunningTime="2026-01-23 18:22:18.433060261 +0000 UTC m=+1281.435518194" Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.618113 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xw828"] Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.632958 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7xpz8"] Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.657951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-90e1-account-create-update-q66bq"] Jan 23 18:22:18 crc kubenswrapper[4760]: W0123 18:22:18.684272 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd842cdd0_8594_4655_9b54_81dfb7855f67.slice/crio-29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf WatchSource:0}: Error finding container 29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf: Status 404 returned error can't find the container with id 29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf Jan 23 18:22:18 crc kubenswrapper[4760]: I0123 18:22:18.789282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-414c-account-create-update-c6x2x"] Jan 23 18:22:18 crc kubenswrapper[4760]: W0123 18:22:18.827168 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod479e422c_5c59_4786_9b8e_e237f521fdaf.slice/crio-81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373 WatchSource:0}: Error finding container 81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373: Status 404 returned error 
can't find the container with id 81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.100784 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.101025 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-central-agent" containerID="cri-o://cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a" gracePeriod=30 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.101130 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-notification-agent" containerID="cri-o://015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638" gracePeriod=30 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.101117 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="sg-core" containerID="cri-o://70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d" gracePeriod=30 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.101372 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="proxy-httpd" containerID="cri-o://96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743" gracePeriod=30 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.436547 4760 generic.go:334] "Generic (PLEG): container finished" podID="d67914fd-9d8f-4d74-8bf3-51550c292f95" containerID="9c6a2c32084165c02fee60a9a34d8423c526a0a9746ea178c3266a55ba67677f" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.437741 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" event={"ID":"d67914fd-9d8f-4d74-8bf3-51550c292f95","Type":"ContainerDied","Data":"9c6a2c32084165c02fee60a9a34d8423c526a0a9746ea178c3266a55ba67677f"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.439216 4760 generic.go:334] "Generic (PLEG): container finished" podID="4996067c-6b4f-4cbf-a418-a77889a7a676" containerID="3d9499f3274fd120b8db2e406f42ae3bac48a2b25f53bef74fa3938efab4ee2e" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.439323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7xpz8" event={"ID":"4996067c-6b4f-4cbf-a418-a77889a7a676","Type":"ContainerDied","Data":"3d9499f3274fd120b8db2e406f42ae3bac48a2b25f53bef74fa3938efab4ee2e"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.439433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7xpz8" event={"ID":"4996067c-6b4f-4cbf-a418-a77889a7a676","Type":"ContainerStarted","Data":"244dbc4fe82e0650baaa0eee6b1e369530acbadd1498b35401398c8ec59404da"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.440398 4760 generic.go:334] "Generic (PLEG): container finished" podID="24798eb5-c0de-4eec-a09e-d3bb7409e529" containerID="904faf4b83a2d4b858b54d20ea9b8d9d54ce1b23c25918373e8e40a9ff50b0e4" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.440494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c2q94" event={"ID":"24798eb5-c0de-4eec-a09e-d3bb7409e529","Type":"ContainerDied","Data":"904faf4b83a2d4b858b54d20ea9b8d9d54ce1b23c25918373e8e40a9ff50b0e4"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.448323 4760 generic.go:334] "Generic (PLEG): container finished" podID="b1169238-c8d2-41e2-889e-54f12b6e2b97" containerID="bba34d5bf51831bcb252fbbb48f54e6c3f1ff4f5e4ab386aa2ea9b2b93cb2116" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.448379 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xw828" event={"ID":"b1169238-c8d2-41e2-889e-54f12b6e2b97","Type":"ContainerDied","Data":"bba34d5bf51831bcb252fbbb48f54e6c3f1ff4f5e4ab386aa2ea9b2b93cb2116"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.448418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xw828" event={"ID":"b1169238-c8d2-41e2-889e-54f12b6e2b97","Type":"ContainerStarted","Data":"c9805a6db29510f62d9811c7e0ce2eebfafe0d6d04881545f02d29c08c884fc9"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.450009 4760 generic.go:334] "Generic (PLEG): container finished" podID="d842cdd0-8594-4655-9b54-81dfb7855f67" containerID="6d08accc95c1a2e943c9d213946233de530d9ded545e980d191fe17b1118985a" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.450046 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90e1-account-create-update-q66bq" event={"ID":"d842cdd0-8594-4655-9b54-81dfb7855f67","Type":"ContainerDied","Data":"6d08accc95c1a2e943c9d213946233de530d9ded545e980d191fe17b1118985a"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.450067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90e1-account-create-update-q66bq" event={"ID":"d842cdd0-8594-4655-9b54-81dfb7855f67","Type":"ContainerStarted","Data":"29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.452558 4760 generic.go:334] "Generic (PLEG): container finished" podID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerID="96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.452573 4760 generic.go:334] "Generic (PLEG): container finished" podID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerID="70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d" exitCode=2 Jan 23 18:22:19 crc 
kubenswrapper[4760]: I0123 18:22:19.452589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerDied","Data":"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.452632 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerDied","Data":"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.454012 4760 generic.go:334] "Generic (PLEG): container finished" podID="479e422c-5c59-4786-9b8e-e237f521fdaf" containerID="14595165d162374308e3bb926f2ae8a113ae0f96083a8d285d2bc9eeb884f0cf" exitCode=0 Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.454109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" event={"ID":"479e422c-5c59-4786-9b8e-e237f521fdaf","Type":"ContainerDied","Data":"14595165d162374308e3bb926f2ae8a113ae0f96083a8d285d2bc9eeb884f0cf"} Jan 23 18:22:19 crc kubenswrapper[4760]: I0123 18:22:19.454133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" event={"ID":"479e422c-5c59-4786-9b8e-e237f521fdaf","Type":"ContainerStarted","Data":"81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373"} Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.481440 4760 generic.go:334] "Generic (PLEG): container finished" podID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerID="cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a" exitCode=0 Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.481803 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerDied","Data":"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a"} Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.897956 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.989964 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts\") pod \"d67914fd-9d8f-4d74-8bf3-51550c292f95\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.990093 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqwx\" (UniqueName: \"kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx\") pod \"d67914fd-9d8f-4d74-8bf3-51550c292f95\" (UID: \"d67914fd-9d8f-4d74-8bf3-51550c292f95\") " Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.992850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d67914fd-9d8f-4d74-8bf3-51550c292f95" (UID: "d67914fd-9d8f-4d74-8bf3-51550c292f95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:20 crc kubenswrapper[4760]: I0123 18:22:20.996128 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx" (OuterVolumeSpecName: "kube-api-access-zpqwx") pod "d67914fd-9d8f-4d74-8bf3-51550c292f95" (UID: "d67914fd-9d8f-4d74-8bf3-51550c292f95"). InnerVolumeSpecName "kube-api-access-zpqwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.077912 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.084714 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.090705 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.091657 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67914fd-9d8f-4d74-8bf3-51550c292f95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.091686 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqwx\" (UniqueName: \"kubernetes.io/projected/d67914fd-9d8f-4d74-8bf3-51550c292f95-kube-api-access-zpqwx\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.097749 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.102311 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq6h8\" (UniqueName: \"kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8\") pod \"4996067c-6b4f-4cbf-a418-a77889a7a676\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192597 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts\") pod \"24798eb5-c0de-4eec-a09e-d3bb7409e529\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts\") pod \"4996067c-6b4f-4cbf-a418-a77889a7a676\" (UID: \"4996067c-6b4f-4cbf-a418-a77889a7a676\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192645 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprkg\" (UniqueName: \"kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg\") pod \"b1169238-c8d2-41e2-889e-54f12b6e2b97\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192706 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2w6f\" (UniqueName: \"kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f\") pod \"24798eb5-c0de-4eec-a09e-d3bb7409e529\" (UID: \"24798eb5-c0de-4eec-a09e-d3bb7409e529\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192724 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wpjrv\" (UniqueName: \"kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv\") pod \"479e422c-5c59-4786-9b8e-e237f521fdaf\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts\") pod \"b1169238-c8d2-41e2-889e-54f12b6e2b97\" (UID: \"b1169238-c8d2-41e2-889e-54f12b6e2b97\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192850 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts\") pod \"d842cdd0-8594-4655-9b54-81dfb7855f67\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192871 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts\") pod \"479e422c-5c59-4786-9b8e-e237f521fdaf\" (UID: \"479e422c-5c59-4786-9b8e-e237f521fdaf\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.192890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jthjj\" (UniqueName: \"kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj\") pod \"d842cdd0-8594-4655-9b54-81dfb7855f67\" (UID: \"d842cdd0-8594-4655-9b54-81dfb7855f67\") " Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.193859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"4996067c-6b4f-4cbf-a418-a77889a7a676" (UID: "4996067c-6b4f-4cbf-a418-a77889a7a676"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.194125 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24798eb5-c0de-4eec-a09e-d3bb7409e529" (UID: "24798eb5-c0de-4eec-a09e-d3bb7409e529"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.194669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1169238-c8d2-41e2-889e-54f12b6e2b97" (UID: "b1169238-c8d2-41e2-889e-54f12b6e2b97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.194908 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "479e422c-5c59-4786-9b8e-e237f521fdaf" (UID: "479e422c-5c59-4786-9b8e-e237f521fdaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.194979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d842cdd0-8594-4655-9b54-81dfb7855f67" (UID: "d842cdd0-8594-4655-9b54-81dfb7855f67"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.196735 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8" (OuterVolumeSpecName: "kube-api-access-tq6h8") pod "4996067c-6b4f-4cbf-a418-a77889a7a676" (UID: "4996067c-6b4f-4cbf-a418-a77889a7a676"). InnerVolumeSpecName "kube-api-access-tq6h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.196781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f" (OuterVolumeSpecName: "kube-api-access-c2w6f") pod "24798eb5-c0de-4eec-a09e-d3bb7409e529" (UID: "24798eb5-c0de-4eec-a09e-d3bb7409e529"). InnerVolumeSpecName "kube-api-access-c2w6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.197393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj" (OuterVolumeSpecName: "kube-api-access-jthjj") pod "d842cdd0-8594-4655-9b54-81dfb7855f67" (UID: "d842cdd0-8594-4655-9b54-81dfb7855f67"). InnerVolumeSpecName "kube-api-access-jthjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.198212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg" (OuterVolumeSpecName: "kube-api-access-zprkg") pod "b1169238-c8d2-41e2-889e-54f12b6e2b97" (UID: "b1169238-c8d2-41e2-889e-54f12b6e2b97"). InnerVolumeSpecName "kube-api-access-zprkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.198593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv" (OuterVolumeSpecName: "kube-api-access-wpjrv") pod "479e422c-5c59-4786-9b8e-e237f521fdaf" (UID: "479e422c-5c59-4786-9b8e-e237f521fdaf"). InnerVolumeSpecName "kube-api-access-wpjrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295346 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1169238-c8d2-41e2-889e-54f12b6e2b97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295386 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d842cdd0-8594-4655-9b54-81dfb7855f67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295448 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479e422c-5c59-4786-9b8e-e237f521fdaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295462 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jthjj\" (UniqueName: \"kubernetes.io/projected/d842cdd0-8594-4655-9b54-81dfb7855f67-kube-api-access-jthjj\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295476 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq6h8\" (UniqueName: \"kubernetes.io/projected/4996067c-6b4f-4cbf-a418-a77889a7a676-kube-api-access-tq6h8\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295488 4760 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24798eb5-c0de-4eec-a09e-d3bb7409e529-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295500 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996067c-6b4f-4cbf-a418-a77889a7a676-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295512 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprkg\" (UniqueName: \"kubernetes.io/projected/b1169238-c8d2-41e2-889e-54f12b6e2b97-kube-api-access-zprkg\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295526 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2w6f\" (UniqueName: \"kubernetes.io/projected/24798eb5-c0de-4eec-a09e-d3bb7409e529-kube-api-access-c2w6f\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.295538 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpjrv\" (UniqueName: \"kubernetes.io/projected/479e422c-5c59-4786-9b8e-e237f521fdaf-kube-api-access-wpjrv\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.491628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" event={"ID":"479e422c-5c59-4786-9b8e-e237f521fdaf","Type":"ContainerDied","Data":"81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.491669 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a1a7375fbed5068855d928fa6d8e3a576b9851dd9884b0f750de13b235a373" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.491733 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-414c-account-create-update-c6x2x" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.497016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" event={"ID":"d67914fd-9d8f-4d74-8bf3-51550c292f95","Type":"ContainerDied","Data":"966d33bc39667af9a97e19e2216b1e3031d2044fd752701489644dc7dfcdd4c0"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.497044 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="966d33bc39667af9a97e19e2216b1e3031d2044fd752701489644dc7dfcdd4c0" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.497083 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9fa7-account-create-update-2x9wm" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.498105 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7xpz8" event={"ID":"4996067c-6b4f-4cbf-a418-a77889a7a676","Type":"ContainerDied","Data":"244dbc4fe82e0650baaa0eee6b1e369530acbadd1498b35401398c8ec59404da"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.498126 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244dbc4fe82e0650baaa0eee6b1e369530acbadd1498b35401398c8ec59404da" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.498176 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7xpz8" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.500040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c2q94" event={"ID":"24798eb5-c0de-4eec-a09e-d3bb7409e529","Type":"ContainerDied","Data":"dd5a584fde5c08cb792b6a2d72f88da006d8455772a2b5491a300cc702d5e135"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.500102 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd5a584fde5c08cb792b6a2d72f88da006d8455772a2b5491a300cc702d5e135" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.500196 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c2q94" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.501870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xw828" event={"ID":"b1169238-c8d2-41e2-889e-54f12b6e2b97","Type":"ContainerDied","Data":"c9805a6db29510f62d9811c7e0ce2eebfafe0d6d04881545f02d29c08c884fc9"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.501907 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9805a6db29510f62d9811c7e0ce2eebfafe0d6d04881545f02d29c08c884fc9" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.501962 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xw828" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.506897 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90e1-account-create-update-q66bq" event={"ID":"d842cdd0-8594-4655-9b54-81dfb7855f67","Type":"ContainerDied","Data":"29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf"} Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.506939 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29917cb61bd8cf97f013223436c25787d9fc499d8ad13a71ab72d7c420a89bdf" Jan 23 18:22:21 crc kubenswrapper[4760]: I0123 18:22:21.506997 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90e1-account-create-update-q66bq" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.298825 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413469 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413628 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" 
(UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413684 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413765 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9x2g\" (UniqueName: \"kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.413857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data\") pod \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\" (UID: \"54db138a-d54f-4224-a4eb-00fc0f39ed3c\") " Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.414826 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.414958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.421676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g" (OuterVolumeSpecName: "kube-api-access-p9x2g") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "kube-api-access-p9x2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.442693 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts" (OuterVolumeSpecName: "scripts") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.483696 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.515292 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.515322 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.515331 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.515340 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54db138a-d54f-4224-a4eb-00fc0f39ed3c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.515349 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9x2g\" (UniqueName: \"kubernetes.io/projected/54db138a-d54f-4224-a4eb-00fc0f39ed3c-kube-api-access-p9x2g\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.542771 4760 generic.go:334] "Generic (PLEG): container finished" podID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerID="015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638" exitCode=0 Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.542827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerDied","Data":"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638"} Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.542861 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54db138a-d54f-4224-a4eb-00fc0f39ed3c","Type":"ContainerDied","Data":"f49b4ffa826de18fecb8b056db54b35e5db3622c5d6ccaabfd22fbd658e97c5c"} Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.542883 4760 scope.go:117] "RemoveContainer" containerID="96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.543051 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.593735 4760 scope.go:117] "RemoveContainer" containerID="70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.609533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.625171 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.635478 4760 scope.go:117] "RemoveContainer" containerID="015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.639997 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data" (OuterVolumeSpecName: "config-data") pod "54db138a-d54f-4224-a4eb-00fc0f39ed3c" (UID: "54db138a-d54f-4224-a4eb-00fc0f39ed3c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.655162 4760 scope.go:117] "RemoveContainer" containerID="cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.671497 4760 scope.go:117] "RemoveContainer" containerID="96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743" Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.671946 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743\": container with ID starting with 96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743 not found: ID does not exist" containerID="96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.671988 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743"} err="failed to get container status \"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743\": rpc error: code = NotFound desc = could not find container \"96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743\": container with ID starting with 96a0056681c5fb68ba1ab5c81e3a14afb00c6d8d5f5a111a7baf908d5ddf4743 not found: ID does not exist" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.672030 4760 scope.go:117] "RemoveContainer" containerID="70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d" Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.672399 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d\": container with ID starting with 
70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d not found: ID does not exist" containerID="70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.672442 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d"} err="failed to get container status \"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d\": rpc error: code = NotFound desc = could not find container \"70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d\": container with ID starting with 70d7751e614926fe03df36c1116b50d976e59d996a0e2fe78a7a39423ef1ec9d not found: ID does not exist" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.672466 4760 scope.go:117] "RemoveContainer" containerID="015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638" Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.673211 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638\": container with ID starting with 015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638 not found: ID does not exist" containerID="015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.673237 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638"} err="failed to get container status \"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638\": rpc error: code = NotFound desc = could not find container \"015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638\": container with ID starting with 015c42ab63a15a946efa1f626e9f317e10c4d550da42e448173dcf0a75e1d638 not found: ID does not 
exist" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.673253 4760 scope.go:117] "RemoveContainer" containerID="cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a" Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.673605 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a\": container with ID starting with cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a not found: ID does not exist" containerID="cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.673641 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a"} err="failed to get container status \"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a\": rpc error: code = NotFound desc = could not find container \"cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a\": container with ID starting with cf2301d53e253913e3f9a73625f5c5bc4d1ff235ec3c48b4690cf22ba433150a not found: ID does not exist" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.726786 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db138a-d54f-4224-a4eb-00fc0f39ed3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.878936 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.917145 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933035 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 
18:22:22.933787 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-central-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933811 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-central-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933828 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d842cdd0-8594-4655-9b54-81dfb7855f67" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933837 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d842cdd0-8594-4655-9b54-81dfb7855f67" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933847 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-notification-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933856 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-notification-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933872 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67914fd-9d8f-4d74-8bf3-51550c292f95" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933879 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67914fd-9d8f-4d74-8bf3-51550c292f95" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933904 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24798eb5-c0de-4eec-a09e-d3bb7409e529" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933912 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24798eb5-c0de-4eec-a09e-d3bb7409e529" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933921 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4996067c-6b4f-4cbf-a418-a77889a7a676" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933928 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4996067c-6b4f-4cbf-a418-a77889a7a676" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933939 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1169238-c8d2-41e2-889e-54f12b6e2b97" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933946 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1169238-c8d2-41e2-889e-54f12b6e2b97" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933957 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479e422c-5c59-4786-9b8e-e237f521fdaf" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933967 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="479e422c-5c59-4786-9b8e-e237f521fdaf" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933978 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="sg-core"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.933985 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="sg-core"
Jan 23 18:22:22 crc kubenswrapper[4760]: E0123 18:22:22.933998 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="proxy-httpd"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934005 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="proxy-httpd"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934211 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-central-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934225 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d842cdd0-8594-4655-9b54-81dfb7855f67" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934236 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67914fd-9d8f-4d74-8bf3-51550c292f95" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934247 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="ceilometer-notification-agent"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934261 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="sg-core"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934268 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24798eb5-c0de-4eec-a09e-d3bb7409e529" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934282 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4996067c-6b4f-4cbf-a418-a77889a7a676" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934299 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" containerName="proxy-httpd"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934308 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="479e422c-5c59-4786-9b8e-e237f521fdaf" containerName="mariadb-account-create-update"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.934321 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1169238-c8d2-41e2-889e-54f12b6e2b97" containerName="mariadb-database-create"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.936225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.940803 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.941186 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:22:22 crc kubenswrapper[4760]: I0123 18:22:22.941243 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfxd\" (UniqueName: \"kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.034971 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.035144 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.136794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.136855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.136901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.136943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.136978 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfxd\" (UniqueName: \"kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.137006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.137029 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.137326 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.137907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.142392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.142935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.143340 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.151567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.156087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfxd\" (UniqueName: \"kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd\") pod \"ceilometer-0\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") " pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.254572 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.610511 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54db138a-d54f-4224-a4eb-00fc0f39ed3c" path="/var/lib/kubelet/pods/54db138a-d54f-4224-a4eb-00fc0f39ed3c/volumes"
Jan 23 18:22:23 crc kubenswrapper[4760]: W0123 18:22:23.715242 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3565988_8d28_4f69_8134_bf7403569bfc.slice/crio-bc5b08ff5d4a197f5b227541239782b2923758fad479e8d613c3f0549c9e3efb WatchSource:0}: Error finding container bc5b08ff5d4a197f5b227541239782b2923758fad479e8d613c3f0549c9e3efb: Status 404 returned error can't find the container with id bc5b08ff5d4a197f5b227541239782b2923758fad479e8d613c3f0549c9e3efb
Jan 23 18:22:23 crc kubenswrapper[4760]: I0123 18:22:23.718512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:22:24 crc kubenswrapper[4760]: I0123 18:22:24.562547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerStarted","Data":"9fb6a2179e8a50a59587a2e6dec9a20c83a801c69742dc33f54bce46409ba55f"}
Jan 23 18:22:24 crc kubenswrapper[4760]: I0123 18:22:24.562994 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerStarted","Data":"bc5b08ff5d4a197f5b227541239782b2923758fad479e8d613c3f0549c9e3efb"}
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.279467 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qxn5j"]
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.281181 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.285154 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qxn5j"]
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.286929 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.287163 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.293842 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jjzkd"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.393919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.393986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.394019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psp5n\" (UniqueName: \"kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.394342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.496595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.496685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.496728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.496754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psp5n\" (UniqueName: \"kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.502207 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.502421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.506199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.517868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psp5n\" (UniqueName: \"kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n\") pod \"nova-cell0-conductor-db-sync-qxn5j\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.580758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerStarted","Data":"ac7b0a00bfad4dd9f74fcf9266cf1cd953bb217e9a076937f579243e7308e481"}
Jan 23 18:22:26 crc kubenswrapper[4760]: I0123 18:22:26.600035 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qxn5j"
Jan 23 18:22:27 crc kubenswrapper[4760]: I0123 18:22:27.077991 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qxn5j"]
Jan 23 18:22:27 crc kubenswrapper[4760]: W0123 18:22:27.078500 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e3ece7e_d0b3_4e4c_816a_1cd699bf8a4e.slice/crio-f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3 WatchSource:0}: Error finding container f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3: Status 404 returned error can't find the container with id f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3
Jan 23 18:22:27 crc kubenswrapper[4760]: I0123 18:22:27.617326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" event={"ID":"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e","Type":"ContainerStarted","Data":"f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3"}
Jan 23 18:22:27 crc kubenswrapper[4760]: I0123 18:22:27.617371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerStarted","Data":"b1078beaa13849c74613e7ac34943ce6bf2e0e4130d971377f5737d676ef4667"}
Jan 23 18:22:29 crc kubenswrapper[4760]: I0123 18:22:29.613249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerStarted","Data":"cacfc1d0b54e5f264c67e719f68896fc6e849c4b4206f0ef84dcdf36cde39953"}
Jan 23 18:22:29 crc kubenswrapper[4760]: I0123 18:22:29.613741 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 18:22:29 crc kubenswrapper[4760]: I0123 18:22:29.643220 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.154968956 podStartE2EDuration="7.643204404s" podCreationTimestamp="2026-01-23 18:22:22 +0000 UTC" firstStartedPulling="2026-01-23 18:22:23.717872119 +0000 UTC m=+1286.720330062" lastFinishedPulling="2026-01-23 18:22:29.206107577 +0000 UTC m=+1292.208565510" observedRunningTime="2026-01-23 18:22:29.639510626 +0000 UTC m=+1292.641968559" watchObservedRunningTime="2026-01-23 18:22:29.643204404 +0000 UTC m=+1292.645662337"
Jan 23 18:22:30 crc kubenswrapper[4760]: I0123 18:22:30.807758 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:22:31 crc kubenswrapper[4760]: I0123 18:22:31.638130 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-central-agent" containerID="cri-o://9fb6a2179e8a50a59587a2e6dec9a20c83a801c69742dc33f54bce46409ba55f" gracePeriod=30
Jan 23 18:22:31 crc kubenswrapper[4760]: I0123 18:22:31.639087 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="proxy-httpd" containerID="cri-o://cacfc1d0b54e5f264c67e719f68896fc6e849c4b4206f0ef84dcdf36cde39953" gracePeriod=30
Jan 23 18:22:31 crc kubenswrapper[4760]: I0123 18:22:31.639114 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-notification-agent" containerID="cri-o://ac7b0a00bfad4dd9f74fcf9266cf1cd953bb217e9a076937f579243e7308e481" gracePeriod=30
Jan 23 18:22:31 crc kubenswrapper[4760]: I0123 18:22:31.639207 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="sg-core" containerID="cri-o://b1078beaa13849c74613e7ac34943ce6bf2e0e4130d971377f5737d676ef4667" gracePeriod=30
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652037 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3565988-8d28-4f69-8134-bf7403569bfc" containerID="cacfc1d0b54e5f264c67e719f68896fc6e849c4b4206f0ef84dcdf36cde39953" exitCode=0
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652073 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3565988-8d28-4f69-8134-bf7403569bfc" containerID="b1078beaa13849c74613e7ac34943ce6bf2e0e4130d971377f5737d676ef4667" exitCode=2
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652081 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3565988-8d28-4f69-8134-bf7403569bfc" containerID="ac7b0a00bfad4dd9f74fcf9266cf1cd953bb217e9a076937f579243e7308e481" exitCode=0
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652089 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3565988-8d28-4f69-8134-bf7403569bfc" containerID="9fb6a2179e8a50a59587a2e6dec9a20c83a801c69742dc33f54bce46409ba55f" exitCode=0
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerDied","Data":"cacfc1d0b54e5f264c67e719f68896fc6e849c4b4206f0ef84dcdf36cde39953"}
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652132 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerDied","Data":"b1078beaa13849c74613e7ac34943ce6bf2e0e4130d971377f5737d676ef4667"}
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerDied","Data":"ac7b0a00bfad4dd9f74fcf9266cf1cd953bb217e9a076937f579243e7308e481"}
Jan 23 18:22:32 crc kubenswrapper[4760]: I0123 18:22:32.652151 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerDied","Data":"9fb6a2179e8a50a59587a2e6dec9a20c83a801c69742dc33f54bce46409ba55f"}
Jan 23 18:22:36 crc kubenswrapper[4760]: I0123 18:22:36.954354 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.104777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dfxd\" (UniqueName: \"kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd\") pod \"c3565988-8d28-4f69-8134-bf7403569bfc\" (UID: \"c3565988-8d28-4f69-8134-bf7403569bfc\") "
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.105144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.105275 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.110193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd" (OuterVolumeSpecName: "kube-api-access-5dfxd") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "kube-api-access-5dfxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.110334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts" (OuterVolumeSpecName: "scripts") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.129048 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.168817 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.190809 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data" (OuterVolumeSpecName: "config-data") pod "c3565988-8d28-4f69-8134-bf7403569bfc" (UID: "c3565988-8d28-4f69-8134-bf7403569bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207089 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207121 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207130 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207140 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207148 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3565988-8d28-4f69-8134-bf7403569bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207157 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dfxd\" (UniqueName: \"kubernetes.io/projected/c3565988-8d28-4f69-8134-bf7403569bfc-kube-api-access-5dfxd\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.207166 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3565988-8d28-4f69-8134-bf7403569bfc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.697146 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" event={"ID":"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e","Type":"ContainerStarted","Data":"28f41d6432980515ec2213a5cbf422a665c4f13ef0f4950cdc229e2f0bd731cf"}
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.702814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3565988-8d28-4f69-8134-bf7403569bfc","Type":"ContainerDied","Data":"bc5b08ff5d4a197f5b227541239782b2923758fad479e8d613c3f0549c9e3efb"}
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.702858 4760 scope.go:117] "RemoveContainer" containerID="cacfc1d0b54e5f264c67e719f68896fc6e849c4b4206f0ef84dcdf36cde39953"
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.702940 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.726173 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" podStartSLOduration=2.088032887 podStartE2EDuration="11.726145362s" podCreationTimestamp="2026-01-23 18:22:26 +0000 UTC" firstStartedPulling="2026-01-23 18:22:27.080400807 +0000 UTC m=+1290.082858740" lastFinishedPulling="2026-01-23 18:22:36.718513282 +0000 UTC m=+1299.720971215" observedRunningTime="2026-01-23 18:22:37.718784275 +0000 UTC m=+1300.721242218" watchObservedRunningTime="2026-01-23 18:22:37.726145362 +0000 UTC m=+1300.728603305"
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.738500 4760 scope.go:117] "RemoveContainer" containerID="b1078beaa13849c74613e7ac34943ce6bf2e0e4130d971377f5737d676ef4667"
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.745170 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.762363 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:22:37
crc kubenswrapper[4760]: I0123 18:22:37.772826 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:37 crc kubenswrapper[4760]: E0123 18:22:37.774121 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-notification-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.774151 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-notification-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: E0123 18:22:37.774170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-central-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.774177 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-central-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: E0123 18:22:37.774209 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="proxy-httpd" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.774218 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="proxy-httpd" Jan 23 18:22:37 crc kubenswrapper[4760]: E0123 18:22:37.774239 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="sg-core" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.774249 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="sg-core" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.774442 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-notification-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: 
I0123 18:22:37.774474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="proxy-httpd" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.775755 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="ceilometer-central-agent" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.775782 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" containerName="sg-core" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.778305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.786140 4760 scope.go:117] "RemoveContainer" containerID="ac7b0a00bfad4dd9f74fcf9266cf1cd953bb217e9a076937f579243e7308e481" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.786341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.786367 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.787370 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.818117 4760 scope.go:117] "RemoveContainer" containerID="9fb6a2179e8a50a59587a2e6dec9a20c83a801c69742dc33f54bce46409ba55f" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.938949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppqq\" (UniqueName: \"kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: 
I0123 18:22:37.939023 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.939088 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.939114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.939139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.939163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:37 crc kubenswrapper[4760]: I0123 18:22:37.939200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppqq\" (UniqueName: \"kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041165 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041206 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 
18:22:38.041222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.041257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.042789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.042810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.046558 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.048147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " 
pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.048905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.048915 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.071341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppqq\" (UniqueName: \"kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq\") pod \"ceilometer-0\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.100402 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.558609 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:22:38 crc kubenswrapper[4760]: I0123 18:22:38.712856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerStarted","Data":"1d8b217c793d86d958f7dd6447ab28afbdbca2e7d6f0d213f1a6f870d7b9da0c"} Jan 23 18:22:39 crc kubenswrapper[4760]: I0123 18:22:39.604683 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3565988-8d28-4f69-8134-bf7403569bfc" path="/var/lib/kubelet/pods/c3565988-8d28-4f69-8134-bf7403569bfc/volumes" Jan 23 18:22:39 crc kubenswrapper[4760]: I0123 18:22:39.720734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerStarted","Data":"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e"} Jan 23 18:22:40 crc kubenswrapper[4760]: I0123 18:22:40.729905 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerStarted","Data":"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3"} Jan 23 18:22:43 crc kubenswrapper[4760]: I0123 18:22:43.774074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerStarted","Data":"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5"} Jan 23 18:22:46 crc kubenswrapper[4760]: I0123 18:22:46.801264 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerStarted","Data":"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445"} Jan 23 18:22:46 crc kubenswrapper[4760]: I0123 
18:22:46.802951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:22:46 crc kubenswrapper[4760]: I0123 18:22:46.826354 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.417998782 podStartE2EDuration="9.826335254s" podCreationTimestamp="2026-01-23 18:22:37 +0000 UTC" firstStartedPulling="2026-01-23 18:22:38.563136164 +0000 UTC m=+1301.565594097" lastFinishedPulling="2026-01-23 18:22:45.971472596 +0000 UTC m=+1308.973930569" observedRunningTime="2026-01-23 18:22:46.820871959 +0000 UTC m=+1309.823329902" watchObservedRunningTime="2026-01-23 18:22:46.826335254 +0000 UTC m=+1309.828793197" Jan 23 18:22:50 crc kubenswrapper[4760]: I0123 18:22:50.844265 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" containerID="28f41d6432980515ec2213a5cbf422a665c4f13ef0f4950cdc229e2f0bd731cf" exitCode=0 Jan 23 18:22:50 crc kubenswrapper[4760]: I0123 18:22:50.844362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" event={"ID":"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e","Type":"ContainerDied","Data":"28f41d6432980515ec2213a5cbf422a665c4f13ef0f4950cdc229e2f0bd731cf"} Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.200971 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.306347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data\") pod \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.306393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts\") pod \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.306535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle\") pod \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.306648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psp5n\" (UniqueName: \"kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n\") pod \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\" (UID: \"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e\") " Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.312775 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts" (OuterVolumeSpecName: "scripts") pod "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" (UID: "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.316061 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n" (OuterVolumeSpecName: "kube-api-access-psp5n") pod "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" (UID: "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e"). InnerVolumeSpecName "kube-api-access-psp5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.330772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data" (OuterVolumeSpecName: "config-data") pod "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" (UID: "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.348935 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" (UID: "4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.408346 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.408384 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.408397 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.408440 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psp5n\" (UniqueName: \"kubernetes.io/projected/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e-kube-api-access-psp5n\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.865793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" event={"ID":"4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e","Type":"ContainerDied","Data":"f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3"} Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.866162 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7716e08a4c2c7020f9d1880cb406d5918812538b7d03c8229033025a184f7d3" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.865850 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qxn5j" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.974238 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 18:22:52 crc kubenswrapper[4760]: E0123 18:22:52.974689 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" containerName="nova-cell0-conductor-db-sync" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.974732 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" containerName="nova-cell0-conductor-db-sync" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.974894 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" containerName="nova-cell0-conductor-db-sync" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.975481 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.979960 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 18:22:52 crc kubenswrapper[4760]: I0123 18:22:52.980185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jjzkd" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.005655 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.122216 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 
18:22:53.122288 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqzd\" (UniqueName: \"kubernetes.io/projected/12e7c803-ac52-4cb2-b29e-14973d73c522-kube-api-access-7pqzd\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.122350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.223759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.223808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqzd\" (UniqueName: \"kubernetes.io/projected/12e7c803-ac52-4cb2-b29e-14973d73c522-kube-api-access-7pqzd\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.223855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.227639 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.227691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e7c803-ac52-4cb2-b29e-14973d73c522-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.240574 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqzd\" (UniqueName: \"kubernetes.io/projected/12e7c803-ac52-4cb2-b29e-14973d73c522-kube-api-access-7pqzd\") pod \"nova-cell0-conductor-0\" (UID: \"12e7c803-ac52-4cb2-b29e-14973d73c522\") " pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.294452 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.752278 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 18:22:53 crc kubenswrapper[4760]: W0123 18:22:53.758196 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e7c803_ac52_4cb2_b29e_14973d73c522.slice/crio-0593030abd5c73e3c20dacaa314a8325c838f37e7ee2d6e6c32d2e98a056a6b9 WatchSource:0}: Error finding container 0593030abd5c73e3c20dacaa314a8325c838f37e7ee2d6e6c32d2e98a056a6b9: Status 404 returned error can't find the container with id 0593030abd5c73e3c20dacaa314a8325c838f37e7ee2d6e6c32d2e98a056a6b9 Jan 23 18:22:53 crc kubenswrapper[4760]: I0123 18:22:53.876297 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12e7c803-ac52-4cb2-b29e-14973d73c522","Type":"ContainerStarted","Data":"0593030abd5c73e3c20dacaa314a8325c838f37e7ee2d6e6c32d2e98a056a6b9"} Jan 23 18:22:54 crc kubenswrapper[4760]: I0123 18:22:54.888173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12e7c803-ac52-4cb2-b29e-14973d73c522","Type":"ContainerStarted","Data":"2a62b60eaf0ff7bb53510d2e6a6ba9cb66e67e4d92461bc2dd3e68cde0d9712b"} Jan 23 18:22:54 crc kubenswrapper[4760]: I0123 18:22:54.888590 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:54 crc kubenswrapper[4760]: I0123 18:22:54.913129 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.913062412 podStartE2EDuration="2.913062412s" podCreationTimestamp="2026-01-23 18:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
18:22:54.906510487 +0000 UTC m=+1317.908968430" watchObservedRunningTime="2026-01-23 18:22:54.913062412 +0000 UTC m=+1317.915520345" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.330081 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.853737 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rh87v"] Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.855200 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.858112 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.858113 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.872307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rh87v"] Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.944853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.944930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:58 crc 
kubenswrapper[4760]: I0123 18:22:58.944970 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:58 crc kubenswrapper[4760]: I0123 18:22:58.945088 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jmg\" (UniqueName: \"kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.046283 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.046357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.046395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: 
I0123 18:22:59.046493 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jmg\" (UniqueName: \"kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.053592 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.055686 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.061133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.066162 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.068006 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.070196 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.078100 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68jmg\" (UniqueName: \"kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg\") pod \"nova-cell0-cell-mapping-rh87v\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.081486 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.082881 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.086343 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.132008 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.149339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.165217 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.166853 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.172996 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.173990 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.183554 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253091 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmvd\" (UniqueName: \"kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253291 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253313 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253330 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253368 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkxlj\" (UniqueName: \"kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253555 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs\") pod \"nova-api-0\" (UID: 
\"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.253728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsz2\" (UniqueName: \"kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.260535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.261991 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.296311 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.343150 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.345509 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.347508 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.353714 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355651 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkxlj\" (UniqueName: \"kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355741 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.355782 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356746 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsz2\" (UniqueName: \"kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmvd\" (UniqueName: 
\"kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.356985 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.357011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.357033 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6j6n\" (UniqueName: \"kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.357055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.360581 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.360824 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.361118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.364848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.365182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.368576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.369787 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.378920 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.392088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsz2\" (UniqueName: \"kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2\") pod \"nova-api-0\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.397042 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmvd\" (UniqueName: \"kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd\") pod \"nova-scheduler-0\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.409173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkxlj\" (UniqueName: \"kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj\") pod \"nova-metadata-0\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") " pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460158 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6j6n\" (UniqueName: \"kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: 
I0123 18:22:59.460575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.460639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.463781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.463832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.465003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.469790 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.476177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.476560 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.477551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.481190 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6j6n\" (UniqueName: \"kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.481956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b\") pod \"dnsmasq-dns-566b5b7845-2hx46\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.496774 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.509680 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.637357 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.683444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.764937 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rh87v"] Jan 23 18:22:59 crc kubenswrapper[4760]: W0123 18:22:59.787394 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod219398d6_2967_4d15_b78a_3ee0165aff71.slice/crio-c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b WatchSource:0}: Error finding container c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b: Status 404 returned error can't find the container with id c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.936066 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4fvwn"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.938634 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.943105 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.943287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.946948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rh87v" event={"ID":"219398d6-2967-4d15-b78a-3ee0165aff71","Type":"ContainerStarted","Data":"c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b"} Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.958720 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4fvwn"] Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.988944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.989007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn" Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.989064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:22:59 crc kubenswrapper[4760]: I0123 18:22:59.989097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrhf\" (UniqueName: \"kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.091001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.091052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrhf\" (UniqueName: \"kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.091136 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.091178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.097299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.102154 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.113961 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:00 crc kubenswrapper[4760]: W0123 18:23:00.138710 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b1d0469_98c9_4d87_af6c_3f7057823848.slice/crio-436cff19c2c755054be4287a1cacc3a0072a3854b5446ee28542a80632ad01d6 WatchSource:0}: Error finding container 436cff19c2c755054be4287a1cacc3a0072a3854b5446ee28542a80632ad01d6: Status 404 returned error can't find the container with id 436cff19c2c755054be4287a1cacc3a0072a3854b5446ee28542a80632ad01d6
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.140200 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.149188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrhf\" (UniqueName: \"kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf\") pod \"nova-cell1-conductor-db-sync-4fvwn\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.164729 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.248099 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.265468 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4fvwn"
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.305896 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.325987 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"]
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.697451 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4fvwn"]
Jan 23 18:23:00 crc kubenswrapper[4760]: W0123 18:23:00.700110 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad627578_31c9_4d00_88a8_4dae148f1ae5.slice/crio-dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb WatchSource:0}: Error finding container dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb: Status 404 returned error can't find the container with id dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.958266 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8cbd4dc9-434e-44e4-b584-3644467dded9","Type":"ContainerStarted","Data":"8b7f3784aa4fad3ebbc318a5a3643294b32388d9ac6a114757a5919eccff5446"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.960255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" event={"ID":"ad627578-31c9-4d00-88a8-4dae148f1ae5","Type":"ContainerStarted","Data":"dd2feac632a228c0df974e1063166cb08920d555a1e3ac5939ab742e36a0251b"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.960283 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" event={"ID":"ad627578-31c9-4d00-88a8-4dae148f1ae5","Type":"ContainerStarted","Data":"dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.962563 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerStarted","Data":"c002c0a8fc7bbd75d081a55be28a701a47ecf3a7e42aaa9a7f22cadc199b6fec"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.964826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerStarted","Data":"436cff19c2c755054be4287a1cacc3a0072a3854b5446ee28542a80632ad01d6"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.966783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9","Type":"ContainerStarted","Data":"42cf89b86be99a432709b1be660c8ed489fff1709f3f9837905040aa5aa45870"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.969616 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rh87v" event={"ID":"219398d6-2967-4d15-b78a-3ee0165aff71","Type":"ContainerStarted","Data":"842d6fd81fe30f309e3a1a58601c9ea55d4eb0d41d68e6ecc2d91710b2419583"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.971832 4760 generic.go:334] "Generic (PLEG): container finished" podID="2623ed5b-d192-4949-af6b-86b2250772da" containerID="d9f0dd2c8e1616599e80c3d25c4de43a6563484c008329d8f247eb3765a43945" exitCode=0
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.971873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" event={"ID":"2623ed5b-d192-4949-af6b-86b2250772da","Type":"ContainerDied","Data":"d9f0dd2c8e1616599e80c3d25c4de43a6563484c008329d8f247eb3765a43945"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.971891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" event={"ID":"2623ed5b-d192-4949-af6b-86b2250772da","Type":"ContainerStarted","Data":"36fdd4cd8dfd2e8e47dd9090447b40f0cfa41e3a523a3aaf4bc4f3a5ddcd2745"}
Jan 23 18:23:00 crc kubenswrapper[4760]: I0123 18:23:00.982043 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" podStartSLOduration=1.982024484 podStartE2EDuration="1.982024484s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:00.975945653 +0000 UTC m=+1323.978403606" watchObservedRunningTime="2026-01-23 18:23:00.982024484 +0000 UTC m=+1323.984482417"
Jan 23 18:23:01 crc kubenswrapper[4760]: I0123 18:23:01.021086 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rh87v" podStartSLOduration=3.021068165 podStartE2EDuration="3.021068165s" podCreationTimestamp="2026-01-23 18:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:01.016648858 +0000 UTC m=+1324.019106791" watchObservedRunningTime="2026-01-23 18:23:01.021068165 +0000 UTC m=+1324.023526108"
Jan 23 18:23:01 crc kubenswrapper[4760]: I0123 18:23:01.983612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" event={"ID":"2623ed5b-d192-4949-af6b-86b2250772da","Type":"ContainerStarted","Data":"5ec79fda168453a382443eef65ebb4564b11c84026c33b2d6f23b9fbdbcec092"}
Jan 23 18:23:01 crc kubenswrapper[4760]: I0123 18:23:01.984094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-2hx46"
Jan 23 18:23:02 crc kubenswrapper[4760]: I0123 18:23:02.014897 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" podStartSLOduration=3.014874616 podStartE2EDuration="3.014874616s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:02.006431841 +0000 UTC m=+1325.008889774" watchObservedRunningTime="2026-01-23 18:23:02.014874616 +0000 UTC m=+1325.017332549"
Jan 23 18:23:02 crc kubenswrapper[4760]: I0123 18:23:02.806964 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 18:23:02 crc kubenswrapper[4760]: I0123 18:23:02.821318 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.002070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8cbd4dc9-434e-44e4-b584-3644467dded9","Type":"ContainerStarted","Data":"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.002156 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8cbd4dc9-434e-44e4-b584-3644467dded9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7" gracePeriod=30
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.006174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerStarted","Data":"082d8ff38244a12e1f0f5c23fd2682351b562274ebdff40ca38711a016291120"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.006224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerStarted","Data":"20a97125458e772f41ccddba41e211d07ff3e5cfe1d050be5496af5c0b394de1"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.009390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerStarted","Data":"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.009452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerStarted","Data":"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.009538 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-log" containerID="cri-o://bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251" gracePeriod=30
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.009577 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-metadata" containerID="cri-o://170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951" gracePeriod=30
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.015128 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9","Type":"ContainerStarted","Data":"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88"}
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.036606 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.273239074 podStartE2EDuration="5.036583296s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="2026-01-23 18:23:00.326175089 +0000 UTC m=+1323.328633022" lastFinishedPulling="2026-01-23 18:23:03.089519321 +0000 UTC m=+1326.091977244" observedRunningTime="2026-01-23 18:23:04.018149235 +0000 UTC m=+1327.020607178" watchObservedRunningTime="2026-01-23 18:23:04.036583296 +0000 UTC m=+1327.039041249"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.042016 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.100364569 podStartE2EDuration="5.041996581s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="2026-01-23 18:23:00.154584877 +0000 UTC m=+1323.157042810" lastFinishedPulling="2026-01-23 18:23:03.096216889 +0000 UTC m=+1326.098674822" observedRunningTime="2026-01-23 18:23:04.036899485 +0000 UTC m=+1327.039357418" watchObservedRunningTime="2026-01-23 18:23:04.041996581 +0000 UTC m=+1327.044454524"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.070766 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.163901761 podStartE2EDuration="5.070747927s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="2026-01-23 18:23:00.183932098 +0000 UTC m=+1323.186390031" lastFinishedPulling="2026-01-23 18:23:03.090778264 +0000 UTC m=+1326.093236197" observedRunningTime="2026-01-23 18:23:04.056197379 +0000 UTC m=+1327.058655312" watchObservedRunningTime="2026-01-23 18:23:04.070747927 +0000 UTC m=+1327.073205860"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.075608 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.249432181 podStartE2EDuration="5.075590676s" podCreationTimestamp="2026-01-23 18:22:59 +0000 UTC" firstStartedPulling="2026-01-23 18:23:00.264889236 +0000 UTC m=+1323.267347169" lastFinishedPulling="2026-01-23 18:23:03.091047711 +0000 UTC m=+1326.093505664" observedRunningTime="2026-01-23 18:23:04.072074231 +0000 UTC m=+1327.074532164" watchObservedRunningTime="2026-01-23 18:23:04.075590676 +0000 UTC m=+1327.078048609"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.498226 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.510345 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.510379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.574425 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.683741 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkxlj\" (UniqueName: \"kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj\") pod \"9b1d0469-98c9-4d87-af6c-3f7057823848\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") "
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.683896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data\") pod \"9b1d0469-98c9-4d87-af6c-3f7057823848\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") "
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.683998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs\") pod \"9b1d0469-98c9-4d87-af6c-3f7057823848\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") "
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.684025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle\") pod \"9b1d0469-98c9-4d87-af6c-3f7057823848\" (UID: \"9b1d0469-98c9-4d87-af6c-3f7057823848\") "
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.684366 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.684710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs" (OuterVolumeSpecName: "logs") pod "9b1d0469-98c9-4d87-af6c-3f7057823848" (UID: "9b1d0469-98c9-4d87-af6c-3f7057823848"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.690015 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj" (OuterVolumeSpecName: "kube-api-access-mkxlj") pod "9b1d0469-98c9-4d87-af6c-3f7057823848" (UID: "9b1d0469-98c9-4d87-af6c-3f7057823848"). InnerVolumeSpecName "kube-api-access-mkxlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.731820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b1d0469-98c9-4d87-af6c-3f7057823848" (UID: "9b1d0469-98c9-4d87-af6c-3f7057823848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.734937 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data" (OuterVolumeSpecName: "config-data") pod "9b1d0469-98c9-4d87-af6c-3f7057823848" (UID: "9b1d0469-98c9-4d87-af6c-3f7057823848"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.786712 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1d0469-98c9-4d87-af6c-3f7057823848-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.786818 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.786843 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkxlj\" (UniqueName: \"kubernetes.io/projected/9b1d0469-98c9-4d87-af6c-3f7057823848-kube-api-access-mkxlj\") on node \"crc\" DevicePath \"\""
Jan 23 18:23:04 crc kubenswrapper[4760]: I0123 18:23:04.786864 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1d0469-98c9-4d87-af6c-3f7057823848-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.032949 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerID="170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951" exitCode=0
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.033016 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerID="bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251" exitCode=143
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.034272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerDied","Data":"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"}
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.034326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerDied","Data":"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"}
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.034358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9b1d0469-98c9-4d87-af6c-3f7057823848","Type":"ContainerDied","Data":"436cff19c2c755054be4287a1cacc3a0072a3854b5446ee28542a80632ad01d6"}
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.034398 4760 scope.go:117] "RemoveContainer" containerID="170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.035176 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.076943 4760 scope.go:117] "RemoveContainer" containerID="bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.089135 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.124010 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.129201 4760 scope.go:117] "RemoveContainer" containerID="170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"
Jan 23 18:23:05 crc kubenswrapper[4760]: E0123 18:23:05.129850 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951\": container with ID starting with 170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951 not found: ID does not exist" containerID="170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.129909 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"} err="failed to get container status \"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951\": rpc error: code = NotFound desc = could not find container \"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951\": container with ID starting with 170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951 not found: ID does not exist"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.129962 4760 scope.go:117] "RemoveContainer" containerID="bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"
Jan 23 18:23:05 crc kubenswrapper[4760]: E0123 18:23:05.130425 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251\": container with ID starting with bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251 not found: ID does not exist" containerID="bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.130467 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"} err="failed to get container status \"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251\": rpc error: code = NotFound desc = could not find container \"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251\": container with ID starting with bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251 not found: ID does not exist"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.130487 4760 scope.go:117] "RemoveContainer" containerID="170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.130850 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951"} err="failed to get container status \"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951\": rpc error: code = NotFound desc = could not find container \"170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951\": container with ID starting with 170d6e14fc9a662e8489d8711d4eacaff5fb3883d772e73826996c87c1771951 not found: ID does not exist"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.130882 4760 scope.go:117] "RemoveContainer" containerID="bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.135612 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251"} err="failed to get container status \"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251\": rpc error: code = NotFound desc = could not find container \"bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251\": container with ID starting with bfc4c56b834769edefc4a154279bbd586674df8e5d17ce48a51dd2831bad0251 not found: ID does not exist"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.161180 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:05 crc kubenswrapper[4760]: E0123 18:23:05.161637 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-log"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.161659 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-log"
Jan 23 18:23:05 crc kubenswrapper[4760]: E0123 18:23:05.161682 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-metadata"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.161689 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-metadata"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.161890 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-metadata"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.161921 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" containerName="nova-metadata-log"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.163054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.165111 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.165257 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.169338 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.295560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9f49\" (UniqueName: \"kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.296002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.296378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.296767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.297147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399314 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9f49\" (UniqueName: \"kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.399781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.403715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.406801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.417735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.421343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9f49\" (UniqueName: \"kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49\") pod \"nova-metadata-0\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.483044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.608855 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1d0469-98c9-4d87-af6c-3f7057823848" path="/var/lib/kubelet/pods/9b1d0469-98c9-4d87-af6c-3f7057823848/volumes"
Jan 23 18:23:05 crc kubenswrapper[4760]: W0123 18:23:05.944525 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8f927b0_c011_4cd7_9aa3_23da2c8e7e43.slice/crio-156d4cd0de46e118ff2afcb087cc8f1a9366d7164f3b86e5bbff18c37b385cea WatchSource:0}: Error finding container 156d4cd0de46e118ff2afcb087cc8f1a9366d7164f3b86e5bbff18c37b385cea: Status 404 returned error can't find the container with id 156d4cd0de46e118ff2afcb087cc8f1a9366d7164f3b86e5bbff18c37b385cea
Jan 23 18:23:05 crc kubenswrapper[4760]: I0123 18:23:05.950380 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:23:06 crc kubenswrapper[4760]: I0123 18:23:06.045085 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerStarted","Data":"156d4cd0de46e118ff2afcb087cc8f1a9366d7164f3b86e5bbff18c37b385cea"}
Jan 23 18:23:07 crc kubenswrapper[4760]: I0123 18:23:07.057012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerStarted","Data":"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417"}
Jan 23 18:23:07 crc kubenswrapper[4760]: I0123 18:23:07.058516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerStarted","Data":"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af"}
Jan 23 18:23:07 crc kubenswrapper[4760]: I0123 18:23:07.083347 4760
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.083320239 podStartE2EDuration="2.083320239s" podCreationTimestamp="2026-01-23 18:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:07.076153458 +0000 UTC m=+1330.078611381" watchObservedRunningTime="2026-01-23 18:23:07.083320239 +0000 UTC m=+1330.085778192" Jan 23 18:23:08 crc kubenswrapper[4760]: I0123 18:23:08.067664 4760 generic.go:334] "Generic (PLEG): container finished" podID="219398d6-2967-4d15-b78a-3ee0165aff71" containerID="842d6fd81fe30f309e3a1a58601c9ea55d4eb0d41d68e6ecc2d91710b2419583" exitCode=0 Jan 23 18:23:08 crc kubenswrapper[4760]: I0123 18:23:08.067702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rh87v" event={"ID":"219398d6-2967-4d15-b78a-3ee0165aff71","Type":"ContainerDied","Data":"842d6fd81fe30f309e3a1a58601c9ea55d4eb0d41d68e6ecc2d91710b2419583"} Jan 23 18:23:08 crc kubenswrapper[4760]: I0123 18:23:08.109531 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.078480 4760 generic.go:334] "Generic (PLEG): container finished" podID="ad627578-31c9-4d00-88a8-4dae148f1ae5" containerID="dd2feac632a228c0df974e1063166cb08920d555a1e3ac5939ab742e36a0251b" exitCode=0 Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.078579 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" event={"ID":"ad627578-31c9-4d00-88a8-4dae148f1ae5","Type":"ContainerDied","Data":"dd2feac632a228c0df974e1063166cb08920d555a1e3ac5939ab742e36a0251b"} Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.477260 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:23:09 crc 
kubenswrapper[4760]: I0123 18:23:09.477679 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.479507 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.498612 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.533640 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.595277 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts\") pod \"219398d6-2967-4d15-b78a-3ee0165aff71\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.595370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data\") pod \"219398d6-2967-4d15-b78a-3ee0165aff71\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.595576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle\") pod \"219398d6-2967-4d15-b78a-3ee0165aff71\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.595664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68jmg\" (UniqueName: \"kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg\") pod 
\"219398d6-2967-4d15-b78a-3ee0165aff71\" (UID: \"219398d6-2967-4d15-b78a-3ee0165aff71\") " Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.603529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg" (OuterVolumeSpecName: "kube-api-access-68jmg") pod "219398d6-2967-4d15-b78a-3ee0165aff71" (UID: "219398d6-2967-4d15-b78a-3ee0165aff71"). InnerVolumeSpecName "kube-api-access-68jmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.627895 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data" (OuterVolumeSpecName: "config-data") pod "219398d6-2967-4d15-b78a-3ee0165aff71" (UID: "219398d6-2967-4d15-b78a-3ee0165aff71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.630256 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts" (OuterVolumeSpecName: "scripts") pod "219398d6-2967-4d15-b78a-3ee0165aff71" (UID: "219398d6-2967-4d15-b78a-3ee0165aff71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.631104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "219398d6-2967-4d15-b78a-3ee0165aff71" (UID: "219398d6-2967-4d15-b78a-3ee0165aff71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.696191 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.698106 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.698216 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.698276 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219398d6-2967-4d15-b78a-3ee0165aff71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.698344 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68jmg\" (UniqueName: \"kubernetes.io/projected/219398d6-2967-4d15-b78a-3ee0165aff71-kube-api-access-68jmg\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.750297 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"] Jan 23 18:23:09 crc kubenswrapper[4760]: I0123 18:23:09.750767 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="dnsmasq-dns" containerID="cri-o://8f478a31bac11c6cb22bb00cc51de5825c9d600ab76d78645f526ce0cb72440e" gracePeriod=10 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.100279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rh87v" 
event={"ID":"219398d6-2967-4d15-b78a-3ee0165aff71","Type":"ContainerDied","Data":"c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b"} Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.100567 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c63618f6b5057fbe15234d101d95cad13d6250a26840a768484b4490c5ea6b7b" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.100531 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rh87v" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.103492 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerID="8f478a31bac11c6cb22bb00cc51de5825c9d600ab76d78645f526ce0cb72440e" exitCode=0 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.103502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" event={"ID":"7c36e36b-cebd-42fa-83ed-3f5ac012865f","Type":"ContainerDied","Data":"8f478a31bac11c6cb22bb00cc51de5825c9d600ab76d78645f526ce0cb72440e"} Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.144555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.242044 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.242293 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-log" containerID="cri-o://20a97125458e772f41ccddba41e211d07ff3e5cfe1d050be5496af5c0b394de1" gracePeriod=30 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.242495 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" 
containerName="nova-api-api" containerID="cri-o://082d8ff38244a12e1f0f5c23fd2682351b562274ebdff40ca38711a016291120" gracePeriod=30 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.251607 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": EOF" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.253312 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": EOF" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.254285 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.259747 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.260143 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-log" containerID="cri-o://ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" gracePeriod=30 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.260290 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-metadata" containerID="cri-o://9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" gracePeriod=30 Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.315365 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb\") pod \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.315481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb\") pod \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.315552 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config\") pod \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.315701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtgl\" (UniqueName: \"kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl\") pod \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.315776 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc\") pod \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\" (UID: \"7c36e36b-cebd-42fa-83ed-3f5ac012865f\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.326611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl" (OuterVolumeSpecName: "kube-api-access-grtgl") pod "7c36e36b-cebd-42fa-83ed-3f5ac012865f" (UID: "7c36e36b-cebd-42fa-83ed-3f5ac012865f"). InnerVolumeSpecName "kube-api-access-grtgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.418074 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtgl\" (UniqueName: \"kubernetes.io/projected/7c36e36b-cebd-42fa-83ed-3f5ac012865f-kube-api-access-grtgl\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.434088 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config" (OuterVolumeSpecName: "config") pod "7c36e36b-cebd-42fa-83ed-3f5ac012865f" (UID: "7c36e36b-cebd-42fa-83ed-3f5ac012865f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.463498 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c36e36b-cebd-42fa-83ed-3f5ac012865f" (UID: "7c36e36b-cebd-42fa-83ed-3f5ac012865f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.479741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c36e36b-cebd-42fa-83ed-3f5ac012865f" (UID: "7c36e36b-cebd-42fa-83ed-3f5ac012865f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.483305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.483549 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.515947 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c36e36b-cebd-42fa-83ed-3f5ac012865f" (UID: "7c36e36b-cebd-42fa-83ed-3f5ac012865f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.519195 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.519224 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.519234 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.519242 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c36e36b-cebd-42fa-83ed-3f5ac012865f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.659103 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.722075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data\") pod \"ad627578-31c9-4d00-88a8-4dae148f1ae5\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.722162 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrhf\" (UniqueName: \"kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf\") pod \"ad627578-31c9-4d00-88a8-4dae148f1ae5\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.722711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts\") pod \"ad627578-31c9-4d00-88a8-4dae148f1ae5\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.722849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle\") pod \"ad627578-31c9-4d00-88a8-4dae148f1ae5\" (UID: \"ad627578-31c9-4d00-88a8-4dae148f1ae5\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.725464 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts" (OuterVolumeSpecName: "scripts") pod "ad627578-31c9-4d00-88a8-4dae148f1ae5" (UID: "ad627578-31c9-4d00-88a8-4dae148f1ae5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.729692 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf" (OuterVolumeSpecName: "kube-api-access-nbrhf") pod "ad627578-31c9-4d00-88a8-4dae148f1ae5" (UID: "ad627578-31c9-4d00-88a8-4dae148f1ae5"). InnerVolumeSpecName "kube-api-access-nbrhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.756309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad627578-31c9-4d00-88a8-4dae148f1ae5" (UID: "ad627578-31c9-4d00-88a8-4dae148f1ae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.760564 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.764274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data" (OuterVolumeSpecName: "config-data") pod "ad627578-31c9-4d00-88a8-4dae148f1ae5" (UID: "ad627578-31c9-4d00-88a8-4dae148f1ae5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.784524 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data\") pod \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9f49\" (UniqueName: \"kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49\") pod \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825171 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs\") pod \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs\") pod \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle\") pod \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\" (UID: \"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43\") " Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825637 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825667 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbrhf\" (UniqueName: \"kubernetes.io/projected/ad627578-31c9-4d00-88a8-4dae148f1ae5-kube-api-access-nbrhf\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825682 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.825694 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad627578-31c9-4d00-88a8-4dae148f1ae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.827997 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs" (OuterVolumeSpecName: "logs") pod "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" (UID: "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.829091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49" (OuterVolumeSpecName: "kube-api-access-v9f49") pod "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" (UID: "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43"). InnerVolumeSpecName "kube-api-access-v9f49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.851583 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data" (OuterVolumeSpecName: "config-data") pod "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" (UID: "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.878522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" (UID: "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.906919 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" (UID: "f8f927b0-c011-4cd7-9aa3-23da2c8e7e43"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.928656 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.928696 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.928708 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.928718 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9f49\" (UniqueName: \"kubernetes.io/projected/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-kube-api-access-v9f49\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:10 crc kubenswrapper[4760]: I0123 18:23:10.928728 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.067262 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.067536 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" containerName="kube-state-metrics" containerID="cri-o://77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3" gracePeriod=30 Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.113759 4760 generic.go:334] 
"Generic (PLEG): container finished" podID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerID="9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" exitCode=0 Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.113819 4760 generic.go:334] "Generic (PLEG): container finished" podID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerID="ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" exitCode=143 Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.113880 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.113930 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerDied","Data":"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.114001 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerDied","Data":"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.114012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8f927b0-c011-4cd7-9aa3-23da2c8e7e43","Type":"ContainerDied","Data":"156d4cd0de46e118ff2afcb087cc8f1a9366d7164f3b86e5bbff18c37b385cea"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.114028 4760 scope.go:117] "RemoveContainer" containerID="9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.115716 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.116082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4fvwn" event={"ID":"ad627578-31c9-4d00-88a8-4dae148f1ae5","Type":"ContainerDied","Data":"dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.116218 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd77f7b8aed09b7f69685480e1863c5e07c5cbb4c92bbcef0e5a9007899e18fb" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.121101 4760 generic.go:334] "Generic (PLEG): container finished" podID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerID="20a97125458e772f41ccddba41e211d07ff3e5cfe1d050be5496af5c0b394de1" exitCode=143 Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.121169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerDied","Data":"20a97125458e772f41ccddba41e211d07ff3e5cfe1d050be5496af5c0b394de1"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.124250 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.124480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-pd7mq" event={"ID":"7c36e36b-cebd-42fa-83ed-3f5ac012865f","Type":"ContainerDied","Data":"2970c94fb4ddf7106eac5d76c47a2c2dca0a252e0948c2987fe60d98375ff92f"} Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.156890 4760 scope.go:117] "RemoveContainer" containerID="ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.170785 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.184874 4760 scope.go:117] "RemoveContainer" containerID="9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.185868 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417\": container with ID starting with 9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417 not found: ID does not exist" containerID="9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.185902 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417"} err="failed to get container status \"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417\": rpc error: code = NotFound desc = could not find container \"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417\": container with ID starting with 9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417 not found: ID does not exist" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 
18:23:11.185932 4760 scope.go:117] "RemoveContainer" containerID="ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.187579 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af\": container with ID starting with ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af not found: ID does not exist" containerID="ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.187607 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af"} err="failed to get container status \"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af\": rpc error: code = NotFound desc = could not find container \"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af\": container with ID starting with ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af not found: ID does not exist" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.187622 4760 scope.go:117] "RemoveContainer" containerID="9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.191688 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417"} err="failed to get container status \"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417\": rpc error: code = NotFound desc = could not find container \"9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417\": container with ID starting with 9ca4045a042fd04db9ae8887f520d1a3c89a8226eaec0323e55d784e1a8d7417 not found: ID does not exist" Jan 23 18:23:11 crc 
kubenswrapper[4760]: I0123 18:23:11.191720 4760 scope.go:117] "RemoveContainer" containerID="ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.192532 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.192670 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af"} err="failed to get container status \"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af\": rpc error: code = NotFound desc = could not find container \"ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af\": container with ID starting with ff5c6d7c7df074133e2e1a6bf426fe51439f94a92258225ff444ed47b6cd28af not found: ID does not exist" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.192705 4760 scope.go:117] "RemoveContainer" containerID="8f478a31bac11c6cb22bb00cc51de5825c9d600ab76d78645f526ce0cb72440e" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.211747 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212145 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-metadata" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212161 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-metadata" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212175 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="init" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212181 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="init" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212190 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="dnsmasq-dns" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212196 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="dnsmasq-dns" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212212 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad627578-31c9-4d00-88a8-4dae148f1ae5" containerName="nova-cell1-conductor-db-sync" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212218 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad627578-31c9-4d00-88a8-4dae148f1ae5" containerName="nova-cell1-conductor-db-sync" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212230 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219398d6-2967-4d15-b78a-3ee0165aff71" containerName="nova-manage" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="219398d6-2967-4d15-b78a-3ee0165aff71" containerName="nova-manage" Jan 23 18:23:11 crc kubenswrapper[4760]: E0123 18:23:11.212262 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-log" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212268 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-log" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212499 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="219398d6-2967-4d15-b78a-3ee0165aff71" containerName="nova-manage" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212519 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" containerName="dnsmasq-dns" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212526 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-metadata" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212544 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad627578-31c9-4d00-88a8-4dae148f1ae5" containerName="nova-cell1-conductor-db-sync" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.212577 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" containerName="nova-metadata-log" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.213173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.220359 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.220502 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.221884 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.225701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.225931 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233336 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233544 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd2zk\" (UniqueName: \"kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk\") pod 
\"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233563 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b7v\" (UniqueName: \"kubernetes.io/projected/678d1978-a829-44dd-9030-3026a9f170b0-kube-api-access-t6b7v\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.233896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.242824 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-pd7mq"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.256173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] 
Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.263330 4760 scope.go:117] "RemoveContainer" containerID="ddaa1b2a77f1f408c8c76365ed1853b75ae0add8cc62f9ee1a4b1d75568fb9dc" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.269651 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356134 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd2zk\" (UniqueName: \"kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.356733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b7v\" (UniqueName: \"kubernetes.io/projected/678d1978-a829-44dd-9030-3026a9f170b0-kube-api-access-t6b7v\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.357379 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.360743 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " 
pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.360838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.374832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.375802 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678d1978-a829-44dd-9030-3026a9f170b0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.379808 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.386882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd2zk\" (UniqueName: \"kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk\") pod \"nova-metadata-0\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.397150 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t6b7v\" (UniqueName: \"kubernetes.io/projected/678d1978-a829-44dd-9030-3026a9f170b0-kube-api-access-t6b7v\") pod \"nova-cell1-conductor-0\" (UID: \"678d1978-a829-44dd-9030-3026a9f170b0\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.536762 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.557293 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.609578 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c36e36b-cebd-42fa-83ed-3f5ac012865f" path="/var/lib/kubelet/pods/7c36e36b-cebd-42fa-83ed-3f5ac012865f/volumes" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.614746 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f927b0-c011-4cd7-9aa3-23da2c8e7e43" path="/var/lib/kubelet/pods/f8f927b0-c011-4cd7-9aa3-23da2c8e7e43/volumes" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.665766 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.763535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqswj\" (UniqueName: \"kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj\") pod \"f98838a6-6ae7-4d8a-af63-4ad4cf693690\" (UID: \"f98838a6-6ae7-4d8a-af63-4ad4cf693690\") " Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.769356 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj" (OuterVolumeSpecName: "kube-api-access-nqswj") pod "f98838a6-6ae7-4d8a-af63-4ad4cf693690" (UID: "f98838a6-6ae7-4d8a-af63-4ad4cf693690"). InnerVolumeSpecName "kube-api-access-nqswj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:11 crc kubenswrapper[4760]: I0123 18:23:11.866016 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqswj\" (UniqueName: \"kubernetes.io/projected/f98838a6-6ae7-4d8a-af63-4ad4cf693690-kube-api-access-nqswj\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.024483 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: W0123 18:23:12.024748 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod678d1978_a829_44dd_9030_3026a9f170b0.slice/crio-53f7ae81c8baf87be15a417cb71baf2f408a85b62778b6112b6973098f6443c4 WatchSource:0}: Error finding container 53f7ae81c8baf87be15a417cb71baf2f408a85b62778b6112b6973098f6443c4: Status 404 returned error can't find the container with id 53f7ae81c8baf87be15a417cb71baf2f408a85b62778b6112b6973098f6443c4 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.098048 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: W0123 18:23:12.111290 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cddb8a6_4311_4f24_84b9_dac36bba5fe2.slice/crio-5efb87e8e3196c4e20db2b3824deb32a0a518dcf60b9c6a6c3ca31e5d61a9b12 WatchSource:0}: Error finding container 5efb87e8e3196c4e20db2b3824deb32a0a518dcf60b9c6a6c3ca31e5d61a9b12: Status 404 returned error can't find the container with id 5efb87e8e3196c4e20db2b3824deb32a0a518dcf60b9c6a6c3ca31e5d61a9b12 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.134756 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerStarted","Data":"5efb87e8e3196c4e20db2b3824deb32a0a518dcf60b9c6a6c3ca31e5d61a9b12"} Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.137879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"678d1978-a829-44dd-9030-3026a9f170b0","Type":"ContainerStarted","Data":"53f7ae81c8baf87be15a417cb71baf2f408a85b62778b6112b6973098f6443c4"} Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.143029 4760 generic.go:334] "Generic (PLEG): container finished" podID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" containerID="77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3" exitCode=2 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.143126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f98838a6-6ae7-4d8a-af63-4ad4cf693690","Type":"ContainerDied","Data":"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3"} Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.143185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f98838a6-6ae7-4d8a-af63-4ad4cf693690","Type":"ContainerDied","Data":"2c59fade446856d8f6d28613bcedc6237019c726fb95f70f7fb1a6d641798c88"} Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.143206 4760 scope.go:117] "RemoveContainer" containerID="77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.143140 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.150343 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerName="nova-scheduler-scheduler" containerID="cri-o://ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" gracePeriod=30 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.175603 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.175886 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-central-agent" containerID="cri-o://221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e" gracePeriod=30 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.176302 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="proxy-httpd" containerID="cri-o://15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445" gracePeriod=30 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.176349 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="sg-core" 
containerID="cri-o://4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5" gracePeriod=30 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.176382 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-notification-agent" containerID="cri-o://94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3" gracePeriod=30 Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.188771 4760 scope.go:117] "RemoveContainer" containerID="77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3" Jan 23 18:23:12 crc kubenswrapper[4760]: E0123 18:23:12.190904 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3\": container with ID starting with 77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3 not found: ID does not exist" containerID="77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.190954 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3"} err="failed to get container status \"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3\": rpc error: code = NotFound desc = could not find container \"77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3\": container with ID starting with 77efd0a8a8c9f0b1d60cc6d9a340ef3e19ce8038f2ea57ccea088d36dc026db3 not found: ID does not exist" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.214681 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.226844 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.237619 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: E0123 18:23:12.238200 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" containerName="kube-state-metrics" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.238215 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" containerName="kube-state-metrics" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.238470 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" containerName="kube-state-metrics" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.239045 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.240919 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.241484 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.274810 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.276530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.276659 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.276771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkmt\" (UniqueName: \"kubernetes.io/projected/2ae37337-d347-4c76-ab83-43463ab30c29-kube-api-access-ldkmt\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.277020 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.379201 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.379264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.379310 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkmt\" (UniqueName: \"kubernetes.io/projected/2ae37337-d347-4c76-ab83-43463ab30c29-kube-api-access-ldkmt\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.379380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.384128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.384934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.386214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae37337-d347-4c76-ab83-43463ab30c29-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.403963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ldkmt\" (UniqueName: \"kubernetes.io/projected/2ae37337-d347-4c76-ab83-43463ab30c29-kube-api-access-ldkmt\") pod \"kube-state-metrics-0\" (UID: \"2ae37337-d347-4c76-ab83-43463ab30c29\") " pod="openstack/kube-state-metrics-0" Jan 23 18:23:12 crc kubenswrapper[4760]: I0123 18:23:12.579845 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:23:13 crc kubenswrapper[4760]: I0123 18:23:13.033452 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:23:13 crc kubenswrapper[4760]: I0123 18:23:13.162808 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerID="4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5" exitCode=2 Jan 23 18:23:13 crc kubenswrapper[4760]: I0123 18:23:13.162882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerDied","Data":"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5"} Jan 23 18:23:13 crc kubenswrapper[4760]: I0123 18:23:13.163954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae37337-d347-4c76-ab83-43463ab30c29","Type":"ContainerStarted","Data":"d41df69839612f3bd1001a806f2edb58abf4ba06a036b8011e5d5b41b86da3c2"} Jan 23 18:23:13 crc kubenswrapper[4760]: I0123 18:23:13.606974 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98838a6-6ae7-4d8a-af63-4ad4cf693690" path="/var/lib/kubelet/pods/f98838a6-6ae7-4d8a-af63-4ad4cf693690/volumes" Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.215846 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerID="15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445" exitCode=0 Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 
18:23:14.216063 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerID="221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e" exitCode=0 Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.215911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerDied","Data":"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445"} Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.216127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerDied","Data":"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e"} Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.217938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerStarted","Data":"4163e0c4d5cf9828fc87b3239e0e9e0bff8edb612eca6869466e220b31fc3cf2"} Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.220077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"678d1978-a829-44dd-9030-3026a9f170b0","Type":"ContainerStarted","Data":"5d6e8a9ba601edf11b13e2016d09a348195fcf33435084268ec6458c07f7c2a9"} Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.220262 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.238497 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.238474264 podStartE2EDuration="3.238474264s" podCreationTimestamp="2026-01-23 18:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 18:23:14.232722371 +0000 UTC m=+1337.235180304" watchObservedRunningTime="2026-01-23 18:23:14.238474264 +0000 UTC m=+1337.240932197" Jan 23 18:23:14 crc kubenswrapper[4760]: E0123 18:23:14.499879 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:23:14 crc kubenswrapper[4760]: E0123 18:23:14.502530 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:23:14 crc kubenswrapper[4760]: E0123 18:23:14.504036 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:23:14 crc kubenswrapper[4760]: E0123 18:23:14.504081 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerName="nova-scheduler-scheduler" Jan 23 18:23:14 crc kubenswrapper[4760]: I0123 18:23:14.946842 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.025445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmvd\" (UniqueName: \"kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd\") pod \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.025589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data\") pod \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.025621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle\") pod \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\" (UID: \"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.033099 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd" (OuterVolumeSpecName: "kube-api-access-fcmvd") pod "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" (UID: "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9"). InnerVolumeSpecName "kube-api-access-fcmvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.052304 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data" (OuterVolumeSpecName: "config-data") pod "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" (UID: "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.054773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" (UID: "33ebc3dd-118f-40ff-8cb2-16ad4cb96db9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.128279 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmvd\" (UniqueName: \"kubernetes.io/projected/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-kube-api-access-fcmvd\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.128311 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.128321 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.235057 4760 generic.go:334] "Generic (PLEG): container finished" podID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" exitCode=0 Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.235134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9","Type":"ContainerDied","Data":"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88"} Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.235182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"33ebc3dd-118f-40ff-8cb2-16ad4cb96db9","Type":"ContainerDied","Data":"42cf89b86be99a432709b1be660c8ed489fff1709f3f9837905040aa5aa45870"} Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.235201 4760 scope.go:117] "RemoveContainer" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.235303 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.238260 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae37337-d347-4c76-ab83-43463ab30c29","Type":"ContainerStarted","Data":"2e6f40636d004e15bdbc6dd92f7ae131d7453d0ba55919ec77c361dfae398a8c"} Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.238478 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.240528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerStarted","Data":"88b837fd12ba1e81096628c654d99d4235f3a99d1ac638ff4859be86307cd623"} Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.258719 4760 scope.go:117] "RemoveContainer" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" Jan 23 18:23:15 crc kubenswrapper[4760]: E0123 18:23:15.259258 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88\": container with ID starting with ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88 not found: ID does not exist" containerID="ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88" Jan 23 18:23:15 crc 
kubenswrapper[4760]: I0123 18:23:15.259298 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88"} err="failed to get container status \"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88\": rpc error: code = NotFound desc = could not find container \"ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88\": container with ID starting with ec631c2fb9be7cb85f32ec635d3bd829aab4222c2199b1221f8f5b94a5971e88 not found: ID does not exist" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.275291 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.198703035 podStartE2EDuration="3.275272851s" podCreationTimestamp="2026-01-23 18:23:12 +0000 UTC" firstStartedPulling="2026-01-23 18:23:13.038194862 +0000 UTC m=+1336.040652795" lastFinishedPulling="2026-01-23 18:23:14.114764678 +0000 UTC m=+1337.117222611" observedRunningTime="2026-01-23 18:23:15.271906181 +0000 UTC m=+1338.274364114" watchObservedRunningTime="2026-01-23 18:23:15.275272851 +0000 UTC m=+1338.277730784" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.308359 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.308343102 podStartE2EDuration="4.308343102s" podCreationTimestamp="2026-01-23 18:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:15.297839013 +0000 UTC m=+1338.300296946" watchObservedRunningTime="2026-01-23 18:23:15.308343102 +0000 UTC m=+1338.310801035" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.323997 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.335171 4760 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.348363 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:15 crc kubenswrapper[4760]: E0123 18:23:15.349011 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerName="nova-scheduler-scheduler" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.349033 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerName="nova-scheduler-scheduler" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.349258 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" containerName="nova-scheduler-scheduler" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.350073 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.358665 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.375318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.435788 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.435848 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data\") pod 
\"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.435876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jj6\" (UniqueName: \"kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.537218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.537280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.537308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84jj6\" (UniqueName: \"kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.541796 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: 
I0123 18:23:15.559718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.561481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84jj6\" (UniqueName: \"kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6\") pod \"nova-scheduler-0\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.608268 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ebc3dd-118f-40ff-8cb2-16ad4cb96db9" path="/var/lib/kubelet/pods/33ebc3dd-118f-40ff-8cb2-16ad4cb96db9/volumes" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.680204 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.695744 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739066 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ppqq\" (UniqueName: \"kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739158 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739294 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.739435 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts\") pod \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\" (UID: \"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489\") " Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.740780 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.741896 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.743372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts" (OuterVolumeSpecName: "scripts") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.743845 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq" (OuterVolumeSpecName: "kube-api-access-2ppqq") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "kube-api-access-2ppqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.771320 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.829603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842267 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842299 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842314 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842323 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842331 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.842339 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ppqq\" (UniqueName: \"kubernetes.io/projected/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-kube-api-access-2ppqq\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.853978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data" (OuterVolumeSpecName: "config-data") pod "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" (UID: "0c1f6c54-8d1c-4ac0-a964-333bf7a7a489"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:15 crc kubenswrapper[4760]: I0123 18:23:15.944253 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.136170 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.260579 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerID="94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3" exitCode=0 Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.260648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerDied","Data":"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3"} Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.260674 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c1f6c54-8d1c-4ac0-a964-333bf7a7a489","Type":"ContainerDied","Data":"1d8b217c793d86d958f7dd6447ab28afbdbca2e7d6f0d213f1a6f870d7b9da0c"} Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.260692 4760 scope.go:117] "RemoveContainer" containerID="15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.260693 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.262840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"001bcc50-de00-4540-ac51-6e97dfe5606f","Type":"ContainerStarted","Data":"e1266214b2936bd24233a4ca55127f6d9abe8486db5e80bd1e8a2f1e51d342e2"} Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.294954 4760 scope.go:117] "RemoveContainer" containerID="4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.295139 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.302228 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.321090 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.321473 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="sg-core" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.321493 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="sg-core" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.321519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-notification-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.321527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-notification-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.321545 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="proxy-httpd" Jan 23 18:23:16 
crc kubenswrapper[4760]: I0123 18:23:16.321551 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="proxy-httpd" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.321566 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-central-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.321571 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-central-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.323049 4760 scope.go:117] "RemoveContainer" containerID="94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.324045 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="proxy-httpd" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.324071 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="sg-core" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.324084 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-central-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.324106 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" containerName="ceilometer-notification-agent" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.325829 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.330817 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.330943 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.330995 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.338507 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351472 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351584 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351623 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5wk\" (UniqueName: \"kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.351857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.362609 4760 scope.go:117] "RemoveContainer" containerID="221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.387393 4760 scope.go:117] "RemoveContainer" 
containerID="15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.388334 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445\": container with ID starting with 15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445 not found: ID does not exist" containerID="15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.388390 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445"} err="failed to get container status \"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445\": rpc error: code = NotFound desc = could not find container \"15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445\": container with ID starting with 15fdef09abeb7fa7a4e3e95273356949bac6fd9ff264ae3b1cbd83dd54028445 not found: ID does not exist" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.388526 4760 scope.go:117] "RemoveContainer" containerID="4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.389319 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5\": container with ID starting with 4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5 not found: ID does not exist" containerID="4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.389384 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5"} err="failed to get container status \"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5\": rpc error: code = NotFound desc = could not find container \"4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5\": container with ID starting with 4c1cfb01e691b487414a2758a737876de5736506adade164312f66d7165460c5 not found: ID does not exist" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.389457 4760 scope.go:117] "RemoveContainer" containerID="94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.392900 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3\": container with ID starting with 94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3 not found: ID does not exist" containerID="94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.392935 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3"} err="failed to get container status \"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3\": rpc error: code = NotFound desc = could not find container \"94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3\": container with ID starting with 94ec16182f9e966f808be0e69ddb280efd0e2e51e5be6961b8dc9be4c4c1cef3 not found: ID does not exist" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.392960 4760 scope.go:117] "RemoveContainer" containerID="221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e" Jan 23 18:23:16 crc kubenswrapper[4760]: E0123 18:23:16.393594 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e\": container with ID starting with 221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e not found: ID does not exist" containerID="221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.393641 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e"} err="failed to get container status \"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e\": rpc error: code = NotFound desc = could not find container \"221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e\": container with ID starting with 221ab48a760020cb3da12f41a44f5717190e79da929f6581512cdb0392b6e44e not found: ID does not exist" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.453990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5wk\" (UniqueName: \"kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454567 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454638 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.454986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.459689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.460891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.461748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.461778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.465296 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.474046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5wk\" (UniqueName: \"kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk\") pod \"ceilometer-0\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " pod="openstack/ceilometer-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.558155 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.558200 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:23:16 crc kubenswrapper[4760]: I0123 18:23:16.651083 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.108703 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:17 crc kubenswrapper[4760]: W0123 18:23:17.126238 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541d9bbc_c433_4e77_bac7_8bb3d5610879.slice/crio-d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798 WatchSource:0}: Error finding container d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798: Status 404 returned error can't find the container with id d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798 Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.272039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"001bcc50-de00-4540-ac51-6e97dfe5606f","Type":"ContainerStarted","Data":"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050"} Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.274436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerStarted","Data":"d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798"} Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.278218 4760 generic.go:334] "Generic (PLEG): container finished" podID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerID="082d8ff38244a12e1f0f5c23fd2682351b562274ebdff40ca38711a016291120" exitCode=0 Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.279269 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerDied","Data":"082d8ff38244a12e1f0f5c23fd2682351b562274ebdff40ca38711a016291120"} Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.296222 4760 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.296203131 podStartE2EDuration="2.296203131s" podCreationTimestamp="2026-01-23 18:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:17.285130566 +0000 UTC m=+1340.287588509" watchObservedRunningTime="2026-01-23 18:23:17.296203131 +0000 UTC m=+1340.298661064" Jan 23 18:23:17 crc kubenswrapper[4760]: I0123 18:23:17.656686 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1f6c54-8d1c-4ac0-a964-333bf7a7a489" path="/var/lib/kubelet/pods/0c1f6c54-8d1c-4ac0-a964-333bf7a7a489/volumes" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.551737 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.589064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czsz2\" (UniqueName: \"kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2\") pod \"57a44844-5883-4db5-bd19-051e15f7c1a6\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.589433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data\") pod \"57a44844-5883-4db5-bd19-051e15f7c1a6\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.592972 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2" (OuterVolumeSpecName: "kube-api-access-czsz2") pod "57a44844-5883-4db5-bd19-051e15f7c1a6" (UID: "57a44844-5883-4db5-bd19-051e15f7c1a6"). InnerVolumeSpecName "kube-api-access-czsz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.614594 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data" (OuterVolumeSpecName: "config-data") pod "57a44844-5883-4db5-bd19-051e15f7c1a6" (UID: "57a44844-5883-4db5-bd19-051e15f7c1a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs\") pod \"57a44844-5883-4db5-bd19-051e15f7c1a6\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691199 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle\") pod \"57a44844-5883-4db5-bd19-051e15f7c1a6\" (UID: \"57a44844-5883-4db5-bd19-051e15f7c1a6\") " Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs" (OuterVolumeSpecName: "logs") pod "57a44844-5883-4db5-bd19-051e15f7c1a6" (UID: "57a44844-5883-4db5-bd19-051e15f7c1a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691890 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a44844-5883-4db5-bd19-051e15f7c1a6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691909 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsz2\" (UniqueName: \"kubernetes.io/projected/57a44844-5883-4db5-bd19-051e15f7c1a6-kube-api-access-czsz2\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.691919 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.719083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a44844-5883-4db5-bd19-051e15f7c1a6" (UID: "57a44844-5883-4db5-bd19-051e15f7c1a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:18 crc kubenswrapper[4760]: I0123 18:23:18.793426 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a44844-5883-4db5-bd19-051e15f7c1a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.310714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a44844-5883-4db5-bd19-051e15f7c1a6","Type":"ContainerDied","Data":"c002c0a8fc7bbd75d081a55be28a701a47ecf3a7e42aaa9a7f22cadc199b6fec"} Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.310742 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.310774 4760 scope.go:117] "RemoveContainer" containerID="082d8ff38244a12e1f0f5c23fd2682351b562274ebdff40ca38711a016291120" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.314398 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerStarted","Data":"97348cc38a5e3478fe29c8796db3fa81598a7f3cec13839426219f164ffeea70"} Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.314540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerStarted","Data":"f63f084670e68826eb62edf1a3baffdfbc6b2faa7d6601b6fe738dbb3a0a67ce"} Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.366130 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.368503 4760 scope.go:117] "RemoveContainer" containerID="20a97125458e772f41ccddba41e211d07ff3e5cfe1d050be5496af5c0b394de1" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.374174 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.420466 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:19 crc kubenswrapper[4760]: E0123 18:23:19.421130 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-log" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.421218 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-log" Jan 23 18:23:19 crc kubenswrapper[4760]: E0123 18:23:19.421305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-api" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.421399 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-api" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.421726 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-api" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.421830 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" containerName="nova-api-log" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.423300 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.428673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.429193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.607052 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a44844-5883-4db5-bd19-051e15f7c1a6" path="/var/lib/kubelet/pods/57a44844-5883-4db5-bd19-051e15f7c1a6/volumes" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.610638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.610775 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.611160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbpn9\" (UniqueName: \"kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.611251 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.712369 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbpn9\" (UniqueName: \"kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.712459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.712509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc 
kubenswrapper[4760]: I0123 18:23:19.712546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.713302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.717141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.718018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.734034 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbpn9\" (UniqueName: \"kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9\") pod \"nova-api-0\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " pod="openstack/nova-api-0" Jan 23 18:23:19 crc kubenswrapper[4760]: I0123 18:23:19.759043 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:20 crc kubenswrapper[4760]: I0123 18:23:20.216364 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:20 crc kubenswrapper[4760]: W0123 18:23:20.216439 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3467c545_8724_467f_8ca6_8711a7307545.slice/crio-ed96e8a70c62d6db2331512a27499941cadb993347c49b04f2e7e96a02b535d5 WatchSource:0}: Error finding container ed96e8a70c62d6db2331512a27499941cadb993347c49b04f2e7e96a02b535d5: Status 404 returned error can't find the container with id ed96e8a70c62d6db2331512a27499941cadb993347c49b04f2e7e96a02b535d5 Jan 23 18:23:20 crc kubenswrapper[4760]: I0123 18:23:20.334696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerStarted","Data":"ed96e8a70c62d6db2331512a27499941cadb993347c49b04f2e7e96a02b535d5"} Jan 23 18:23:20 crc kubenswrapper[4760]: I0123 18:23:20.337935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerStarted","Data":"dbb4bece72ac8859809c295af3a274051db52b5799e6d36c33234013b12f5d21"} Jan 23 18:23:20 crc kubenswrapper[4760]: I0123 18:23:20.681073 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.349349 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerStarted","Data":"36f69e732b479b2b813f5ee229824924d7073cfd02d602339522d8bbe8e97ab3"} Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.349853 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:23:21 crc 
kubenswrapper[4760]: I0123 18:23:21.351275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerStarted","Data":"d4b0ed15ef483e372df5902ce7fb07b08c0e2c42f0a9193c85ed87336d443a25"} Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.351309 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerStarted","Data":"c86625f94eb1fbd366acd5710a25fa5ccbe5ab7354246856e092db21eb5e430c"} Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.371120 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.475277921 podStartE2EDuration="5.371100669s" podCreationTimestamp="2026-01-23 18:23:16 +0000 UTC" firstStartedPulling="2026-01-23 18:23:17.128618265 +0000 UTC m=+1340.131076198" lastFinishedPulling="2026-01-23 18:23:21.024441013 +0000 UTC m=+1344.026898946" observedRunningTime="2026-01-23 18:23:21.36846839 +0000 UTC m=+1344.370926343" watchObservedRunningTime="2026-01-23 18:23:21.371100669 +0000 UTC m=+1344.373558602" Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.393614 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3935944989999998 podStartE2EDuration="2.393594499s" podCreationTimestamp="2026-01-23 18:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:21.385801502 +0000 UTC m=+1344.388259445" watchObservedRunningTime="2026-01-23 18:23:21.393594499 +0000 UTC m=+1344.396052432" Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.558608 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.558659 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:23:21 crc kubenswrapper[4760]: I0123 18:23:21.583893 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 18:23:22 crc kubenswrapper[4760]: I0123 18:23:22.586662 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:23:22 crc kubenswrapper[4760]: I0123 18:23:22.586701 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:23:22 crc kubenswrapper[4760]: I0123 18:23:22.595782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 18:23:25 crc kubenswrapper[4760]: I0123 18:23:25.681163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:23:25 crc kubenswrapper[4760]: I0123 18:23:25.721088 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:23:26 crc kubenswrapper[4760]: I0123 18:23:26.444096 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:23:29 crc kubenswrapper[4760]: I0123 18:23:29.760001 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:23:29 crc kubenswrapper[4760]: I0123 18:23:29.760294 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:23:30 crc kubenswrapper[4760]: I0123 18:23:30.841657 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:23:30 crc kubenswrapper[4760]: I0123 18:23:30.841692 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:23:31 crc kubenswrapper[4760]: I0123 18:23:31.562823 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:23:31 crc kubenswrapper[4760]: I0123 18:23:31.567339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:23:31 crc kubenswrapper[4760]: I0123 18:23:31.574458 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:23:32 crc kubenswrapper[4760]: I0123 18:23:32.458152 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.411275 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.466921 4760 generic.go:334] "Generic (PLEG): container finished" podID="8cbd4dc9-434e-44e4-b584-3644467dded9" containerID="783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7" exitCode=137 Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.467187 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.467856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8cbd4dc9-434e-44e4-b584-3644467dded9","Type":"ContainerDied","Data":"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7"} Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.467895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8cbd4dc9-434e-44e4-b584-3644467dded9","Type":"ContainerDied","Data":"8b7f3784aa4fad3ebbc318a5a3643294b32388d9ac6a114757a5919eccff5446"} Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.467914 4760 scope.go:117] "RemoveContainer" containerID="783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.487846 4760 scope.go:117] "RemoveContainer" containerID="783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7" Jan 23 18:23:34 crc kubenswrapper[4760]: E0123 18:23:34.488604 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7\": container with ID starting with 783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7 not found: ID does not exist" containerID="783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 
18:23:34.488649 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7"} err="failed to get container status \"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7\": rpc error: code = NotFound desc = could not find container \"783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7\": container with ID starting with 783861113abe1b405d1eef17ad97584d5a5954c7477fa2f0b320de76e06642b7 not found: ID does not exist" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.509903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6j6n\" (UniqueName: \"kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n\") pod \"8cbd4dc9-434e-44e4-b584-3644467dded9\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.510024 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle\") pod \"8cbd4dc9-434e-44e4-b584-3644467dded9\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.510052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data\") pod \"8cbd4dc9-434e-44e4-b584-3644467dded9\" (UID: \"8cbd4dc9-434e-44e4-b584-3644467dded9\") " Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.516521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n" (OuterVolumeSpecName: "kube-api-access-q6j6n") pod "8cbd4dc9-434e-44e4-b584-3644467dded9" (UID: "8cbd4dc9-434e-44e4-b584-3644467dded9"). 
InnerVolumeSpecName "kube-api-access-q6j6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.536830 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data" (OuterVolumeSpecName: "config-data") pod "8cbd4dc9-434e-44e4-b584-3644467dded9" (UID: "8cbd4dc9-434e-44e4-b584-3644467dded9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.541491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cbd4dc9-434e-44e4-b584-3644467dded9" (UID: "8cbd4dc9-434e-44e4-b584-3644467dded9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.613253 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.613312 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbd4dc9-434e-44e4-b584-3644467dded9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.613333 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6j6n\" (UniqueName: \"kubernetes.io/projected/8cbd4dc9-434e-44e4-b584-3644467dded9-kube-api-access-q6j6n\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.816623 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 
18:23:34.826325 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.847995 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:23:34 crc kubenswrapper[4760]: E0123 18:23:34.848562 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbd4dc9-434e-44e4-b584-3644467dded9" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.848592 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbd4dc9-434e-44e4-b584-3644467dded9" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.848941 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbd4dc9-434e-44e4-b584-3644467dded9" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.849873 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.857374 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.857450 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.857724 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.868306 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.918703 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.918802 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.918849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 
18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.918913 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:34 crc kubenswrapper[4760]: I0123 18:23:34.918998 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dst\" (UniqueName: \"kubernetes.io/projected/6b36e62e-e56c-4619-a885-dc26d824c2ed-kube-api-access-f4dst\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.020598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.020673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.020761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 
18:23:35.020813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dst\" (UniqueName: \"kubernetes.io/projected/6b36e62e-e56c-4619-a885-dc26d824c2ed-kube-api-access-f4dst\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.020900 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.026397 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.026422 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.027338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.028646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b36e62e-e56c-4619-a885-dc26d824c2ed-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.045457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dst\" (UniqueName: \"kubernetes.io/projected/6b36e62e-e56c-4619-a885-dc26d824c2ed-kube-api-access-f4dst\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b36e62e-e56c-4619-a885-dc26d824c2ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.171936 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.605677 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cbd4dc9-434e-44e4-b584-3644467dded9" path="/var/lib/kubelet/pods/8cbd4dc9-434e-44e4-b584-3644467dded9/volumes" Jan 23 18:23:35 crc kubenswrapper[4760]: I0123 18:23:35.654954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:23:36 crc kubenswrapper[4760]: I0123 18:23:36.488373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b36e62e-e56c-4619-a885-dc26d824c2ed","Type":"ContainerStarted","Data":"6c35376faf7a8b4755cc912b82533d1cbdfe5e114d75df6bf78216cd6d8e5a05"} Jan 23 18:23:36 crc kubenswrapper[4760]: I0123 18:23:36.488677 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b36e62e-e56c-4619-a885-dc26d824c2ed","Type":"ContainerStarted","Data":"8b4b5f670d974eef3ab3abac41ad56d5ef0faea6c44fedb7b5d73cd5a2e2159e"} Jan 23 18:23:36 crc kubenswrapper[4760]: I0123 18:23:36.522820 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.52279105 podStartE2EDuration="2.52279105s" podCreationTimestamp="2026-01-23 18:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:36.512512892 +0000 UTC m=+1359.514970825" watchObservedRunningTime="2026-01-23 18:23:36.52279105 +0000 UTC m=+1359.525249003" Jan 23 18:23:39 crc kubenswrapper[4760]: I0123 18:23:39.764034 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:23:39 crc kubenswrapper[4760]: I0123 18:23:39.764917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:23:39 crc kubenswrapper[4760]: I0123 18:23:39.764995 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:23:39 crc kubenswrapper[4760]: I0123 18:23:39.768441 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.172848 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.525003 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.528900 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.728609 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.734377 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.750645 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.836000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv442\" (UniqueName: \"kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.836272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.836324 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.836394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.836602 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.938175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.938238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.938276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.938350 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.938402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv442\" 
(UniqueName: \"kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.939218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.939247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.939449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.939469 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config\") pod \"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:40 crc kubenswrapper[4760]: I0123 18:23:40.960612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv442\" (UniqueName: \"kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442\") pod 
\"dnsmasq-dns-5b856c5697-ls9tt\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:41 crc kubenswrapper[4760]: I0123 18:23:41.054397 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:41 crc kubenswrapper[4760]: I0123 18:23:41.562810 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.542712 4760 generic.go:334] "Generic (PLEG): container finished" podID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerID="b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39" exitCode=0 Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.542827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" event={"ID":"adfa5ed3-d512-4e7a-b912-54a3a7882a0e","Type":"ContainerDied","Data":"b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39"} Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.543857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" event={"ID":"adfa5ed3-d512-4e7a-b912-54a3a7882a0e","Type":"ContainerStarted","Data":"e371438e2a01402d5cff8ee4ea45a25301a7fa02124e9416e4da8d3652b70469"} Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.638578 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.639235 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-central-agent" containerID="cri-o://f63f084670e68826eb62edf1a3baffdfbc6b2faa7d6601b6fe738dbb3a0a67ce" gracePeriod=30 Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.639396 4760 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-notification-agent" containerID="cri-o://97348cc38a5e3478fe29c8796db3fa81598a7f3cec13839426219f164ffeea70" gracePeriod=30 Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.639528 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="proxy-httpd" containerID="cri-o://36f69e732b479b2b813f5ee229824924d7073cfd02d602339522d8bbe8e97ab3" gracePeriod=30 Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.639335 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="sg-core" containerID="cri-o://dbb4bece72ac8859809c295af3a274051db52b5799e6d36c33234013b12f5d21" gracePeriod=30 Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.662999 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 18:23:42 crc kubenswrapper[4760]: I0123 18:23:42.786144 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.554732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" event={"ID":"adfa5ed3-d512-4e7a-b912-54a3a7882a0e","Type":"ContainerStarted","Data":"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab"} Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.554885 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.558884 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerID="36f69e732b479b2b813f5ee229824924d7073cfd02d602339522d8bbe8e97ab3" exitCode=0 Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.558932 4760 generic.go:334] "Generic (PLEG): container finished" podID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerID="dbb4bece72ac8859809c295af3a274051db52b5799e6d36c33234013b12f5d21" exitCode=2 Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.558949 4760 generic.go:334] "Generic (PLEG): container finished" podID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerID="f63f084670e68826eb62edf1a3baffdfbc6b2faa7d6601b6fe738dbb3a0a67ce" exitCode=0 Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.558955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerDied","Data":"36f69e732b479b2b813f5ee229824924d7073cfd02d602339522d8bbe8e97ab3"} Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.559005 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerDied","Data":"dbb4bece72ac8859809c295af3a274051db52b5799e6d36c33234013b12f5d21"} Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.559020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerDied","Data":"f63f084670e68826eb62edf1a3baffdfbc6b2faa7d6601b6fe738dbb3a0a67ce"} Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.559177 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-log" containerID="cri-o://c86625f94eb1fbd366acd5710a25fa5ccbe5ab7354246856e092db21eb5e430c" gracePeriod=30 Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.559240 4760 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/nova-api-0" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-api" containerID="cri-o://d4b0ed15ef483e372df5902ce7fb07b08c0e2c42f0a9193c85ed87336d443a25" gracePeriod=30 Jan 23 18:23:43 crc kubenswrapper[4760]: I0123 18:23:43.592916 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" podStartSLOduration=3.592883387 podStartE2EDuration="3.592883387s" podCreationTimestamp="2026-01-23 18:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:43.576237351 +0000 UTC m=+1366.578695284" watchObservedRunningTime="2026-01-23 18:23:43.592883387 +0000 UTC m=+1366.595341320" Jan 23 18:23:44 crc kubenswrapper[4760]: I0123 18:23:44.568523 4760 generic.go:334] "Generic (PLEG): container finished" podID="3467c545-8724-467f-8ca6-8711a7307545" containerID="c86625f94eb1fbd366acd5710a25fa5ccbe5ab7354246856e092db21eb5e430c" exitCode=143 Jan 23 18:23:44 crc kubenswrapper[4760]: I0123 18:23:44.568613 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerDied","Data":"c86625f94eb1fbd366acd5710a25fa5ccbe5ab7354246856e092db21eb5e430c"} Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.172748 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.194059 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.589377 4760 generic.go:334] "Generic (PLEG): container finished" podID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerID="97348cc38a5e3478fe29c8796db3fa81598a7f3cec13839426219f164ffeea70" exitCode=0 Jan 23 18:23:45 crc kubenswrapper[4760]: 
I0123 18:23:45.589450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerDied","Data":"97348cc38a5e3478fe29c8796db3fa81598a7f3cec13839426219f164ffeea70"} Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.589777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"541d9bbc-c433-4e77-bac7-8bb3d5610879","Type":"ContainerDied","Data":"d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798"} Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.589800 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d294a55bcb08cba649df9834a2ddc64a90fdafbb7c98acaeed98f52bc56f1798" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.623272 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.671026 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.723877 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.723957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5wk\" (UniqueName: \"kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724048 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724123 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts\") pod \"541d9bbc-c433-4e77-bac7-8bb3d5610879\" (UID: \"541d9bbc-c433-4e77-bac7-8bb3d5610879\") " Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.724574 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.725794 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.726209 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.733727 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts" (OuterVolumeSpecName: "scripts") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.736109 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk" (OuterVolumeSpecName: "kube-api-access-kt5wk") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "kube-api-access-kt5wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.751968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.778180 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.824622 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.824698 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lhxxf"] Jan 23 18:23:45 crc kubenswrapper[4760]: E0123 18:23:45.825509 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="sg-core" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.825554 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="sg-core" Jan 23 18:23:45 crc kubenswrapper[4760]: E0123 18:23:45.825619 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-notification-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.825630 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-notification-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: E0123 18:23:45.826289 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="proxy-httpd" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826300 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="proxy-httpd" Jan 23 18:23:45 crc kubenswrapper[4760]: E0123 18:23:45.826317 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-central-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826325 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-central-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826849 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="proxy-httpd" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826869 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-central-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826913 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="sg-core" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.826950 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" containerName="ceilometer-notification-agent" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.827379 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5wk\" (UniqueName: \"kubernetes.io/projected/541d9bbc-c433-4e77-bac7-8bb3d5610879-kube-api-access-kt5wk\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.827452 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.827466 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 
18:23:45.827478 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/541d9bbc-c433-4e77-bac7-8bb3d5610879-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.827492 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.827505 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.828305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.831626 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.831861 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.833664 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lhxxf"] Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.859641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data" (OuterVolumeSpecName: "config-data") pod "541d9bbc-c433-4e77-bac7-8bb3d5610879" (UID: "541d9bbc-c433-4e77-bac7-8bb3d5610879"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.930371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.930814 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x79g\" (UniqueName: \"kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.930911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.930945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:45 crc kubenswrapper[4760]: I0123 18:23:45.931171 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541d9bbc-c433-4e77-bac7-8bb3d5610879-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 
18:23:46.032521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.032716 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x79g\" (UniqueName: \"kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.032747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.032777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.036388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.036909 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.037496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.048648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x79g\" (UniqueName: \"kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g\") pod \"nova-cell1-cell-mapping-lhxxf\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.076090 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.076156 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.151954 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.591558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lhxxf"] Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.599299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lhxxf" event={"ID":"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59","Type":"ContainerStarted","Data":"08fd7dd9904c959eddac829a55a0db6f7355c82fea8a61d3c78cf6f256da4034"} Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.599531 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.759115 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.767632 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.790475 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.792486 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.795543 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.795898 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.796010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.821000 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845330 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845459 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845488 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6qpfj\" (UniqueName: \"kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845544 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.845578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.947136 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpfj\" (UniqueName: \"kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj\") pod 
\"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.947564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.947750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.947910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.948143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.948301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.948485 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.948581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.948745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.952725 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.953780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.953801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc 
kubenswrapper[4760]: I0123 18:23:46.954476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.954604 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.960483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:46 crc kubenswrapper[4760]: I0123 18:23:46.973798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpfj\" (UniqueName: \"kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj\") pod \"ceilometer-0\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " pod="openstack/ceilometer-0" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.196771 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.618739 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541d9bbc-c433-4e77-bac7-8bb3d5610879" path="/var/lib/kubelet/pods/541d9bbc-c433-4e77-bac7-8bb3d5610879/volumes" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.621004 4760 generic.go:334] "Generic (PLEG): container finished" podID="3467c545-8724-467f-8ca6-8711a7307545" containerID="d4b0ed15ef483e372df5902ce7fb07b08c0e2c42f0a9193c85ed87336d443a25" exitCode=0 Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.621080 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerDied","Data":"d4b0ed15ef483e372df5902ce7fb07b08c0e2c42f0a9193c85ed87336d443a25"} Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.623394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lhxxf" event={"ID":"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59","Type":"ContainerStarted","Data":"ea0873daf47c8302d4dd187b05afc8c846e5c646ed98de0a787d6749e372317c"} Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.651149 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lhxxf" podStartSLOduration=2.651130257 podStartE2EDuration="2.651130257s" podCreationTimestamp="2026-01-23 18:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:47.645117388 +0000 UTC m=+1370.647575311" watchObservedRunningTime="2026-01-23 18:23:47.651130257 +0000 UTC m=+1370.653588190" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.706018 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:23:47 crc kubenswrapper[4760]: W0123 18:23:47.717360 4760 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49 WatchSource:0}: Error finding container 18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49: Status 404 returned error can't find the container with id 18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49 Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.720111 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.804109 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.915133 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs\") pod \"3467c545-8724-467f-8ca6-8711a7307545\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.915663 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle\") pod \"3467c545-8724-467f-8ca6-8711a7307545\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.915812 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbpn9\" (UniqueName: \"kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9\") pod \"3467c545-8724-467f-8ca6-8711a7307545\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.915845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data\") pod \"3467c545-8724-467f-8ca6-8711a7307545\" (UID: \"3467c545-8724-467f-8ca6-8711a7307545\") " Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.915906 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs" (OuterVolumeSpecName: "logs") pod "3467c545-8724-467f-8ca6-8711a7307545" (UID: "3467c545-8724-467f-8ca6-8711a7307545"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.916367 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3467c545-8724-467f-8ca6-8711a7307545-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.921212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9" (OuterVolumeSpecName: "kube-api-access-rbpn9") pod "3467c545-8724-467f-8ca6-8711a7307545" (UID: "3467c545-8724-467f-8ca6-8711a7307545"). InnerVolumeSpecName "kube-api-access-rbpn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.949649 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data" (OuterVolumeSpecName: "config-data") pod "3467c545-8724-467f-8ca6-8711a7307545" (UID: "3467c545-8724-467f-8ca6-8711a7307545"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:47 crc kubenswrapper[4760]: I0123 18:23:47.955240 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3467c545-8724-467f-8ca6-8711a7307545" (UID: "3467c545-8724-467f-8ca6-8711a7307545"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.022048 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.022081 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbpn9\" (UniqueName: \"kubernetes.io/projected/3467c545-8724-467f-8ca6-8711a7307545-kube-api-access-rbpn9\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.022092 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3467c545-8724-467f-8ca6-8711a7307545-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.634271 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.634397 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3467c545-8724-467f-8ca6-8711a7307545","Type":"ContainerDied","Data":"ed96e8a70c62d6db2331512a27499941cadb993347c49b04f2e7e96a02b535d5"} Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.634643 4760 scope.go:117] "RemoveContainer" containerID="d4b0ed15ef483e372df5902ce7fb07b08c0e2c42f0a9193c85ed87336d443a25" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.636632 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerStarted","Data":"18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49"} Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.678631 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.686787 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.688063 4760 scope.go:117] "RemoveContainer" containerID="c86625f94eb1fbd366acd5710a25fa5ccbe5ab7354246856e092db21eb5e430c" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.765446 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:48 crc kubenswrapper[4760]: E0123 18:23:48.765804 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-log" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.765825 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-log" Jan 23 18:23:48 crc kubenswrapper[4760]: E0123 18:23:48.765840 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-api" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.765848 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-api" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.766026 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-log" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.766054 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3467c545-8724-467f-8ca6-8711a7307545" containerName="nova-api-api" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.767247 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.770013 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.770271 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.770287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.779395 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943541 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:48 crc kubenswrapper[4760]: I0123 18:23:48.943774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkk4\" (UniqueName: \"kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.045767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.045867 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.045933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.046014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.046056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dkk4\" (UniqueName: \"kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.046266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.047025 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.050909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.055899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.055920 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.063385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.083508 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dkk4\" (UniqueName: \"kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4\") pod \"nova-api-0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.102654 4760 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:49 crc kubenswrapper[4760]: W0123 18:23:49.538949 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7e833af_097b_4b0a_9bcd_3c6178f84bb0.slice/crio-aa72843779d2de9f2a4b2078f1577e474276fe8f512f71160c6dffab4143bf37 WatchSource:0}: Error finding container aa72843779d2de9f2a4b2078f1577e474276fe8f512f71160c6dffab4143bf37: Status 404 returned error can't find the container with id aa72843779d2de9f2a4b2078f1577e474276fe8f512f71160c6dffab4143bf37 Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.540812 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.612590 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3467c545-8724-467f-8ca6-8711a7307545" path="/var/lib/kubelet/pods/3467c545-8724-467f-8ca6-8711a7307545/volumes" Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.654791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerStarted","Data":"2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c"} Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.654860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerStarted","Data":"f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415"} Jan 23 18:23:49 crc kubenswrapper[4760]: I0123 18:23:49.656230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerStarted","Data":"aa72843779d2de9f2a4b2078f1577e474276fe8f512f71160c6dffab4143bf37"} Jan 23 18:23:50 crc kubenswrapper[4760]: I0123 18:23:50.675085 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerStarted","Data":"3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81"} Jan 23 18:23:50 crc kubenswrapper[4760]: I0123 18:23:50.676524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerStarted","Data":"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be"} Jan 23 18:23:50 crc kubenswrapper[4760]: I0123 18:23:50.676545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerStarted","Data":"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3"} Jan 23 18:23:50 crc kubenswrapper[4760]: I0123 18:23:50.706955 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.706937716 podStartE2EDuration="2.706937716s" podCreationTimestamp="2026-01-23 18:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:50.698048157 +0000 UTC m=+1373.700506090" watchObservedRunningTime="2026-01-23 18:23:50.706937716 +0000 UTC m=+1373.709395649" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.056352 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.135941 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"] Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.136173 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="dnsmasq-dns" 
containerID="cri-o://5ec79fda168453a382443eef65ebb4564b11c84026c33b2d6f23b9fbdbcec092" gracePeriod=10 Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.687702 4760 generic.go:334] "Generic (PLEG): container finished" podID="34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" containerID="ea0873daf47c8302d4dd187b05afc8c846e5c646ed98de0a787d6749e372317c" exitCode=0 Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.688022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lhxxf" event={"ID":"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59","Type":"ContainerDied","Data":"ea0873daf47c8302d4dd187b05afc8c846e5c646ed98de0a787d6749e372317c"} Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.690728 4760 generic.go:334] "Generic (PLEG): container finished" podID="2623ed5b-d192-4949-af6b-86b2250772da" containerID="5ec79fda168453a382443eef65ebb4564b11c84026c33b2d6f23b9fbdbcec092" exitCode=0 Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.691438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" event={"ID":"2623ed5b-d192-4949-af6b-86b2250772da","Type":"ContainerDied","Data":"5ec79fda168453a382443eef65ebb4564b11c84026c33b2d6f23b9fbdbcec092"} Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.691465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" event={"ID":"2623ed5b-d192-4949-af6b-86b2250772da","Type":"ContainerDied","Data":"36fdd4cd8dfd2e8e47dd9090447b40f0cfa41e3a523a3aaf4bc4f3a5ddcd2745"} Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.691476 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fdd4cd8dfd2e8e47dd9090447b40f0cfa41e3a523a3aaf4bc4f3a5ddcd2745" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.743666 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.897600 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config\") pod \"2623ed5b-d192-4949-af6b-86b2250772da\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.897708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b\") pod \"2623ed5b-d192-4949-af6b-86b2250772da\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.897944 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb\") pod \"2623ed5b-d192-4949-af6b-86b2250772da\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.897989 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb\") pod \"2623ed5b-d192-4949-af6b-86b2250772da\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.898025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc\") pod \"2623ed5b-d192-4949-af6b-86b2250772da\" (UID: \"2623ed5b-d192-4949-af6b-86b2250772da\") " Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.902716 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b" (OuterVolumeSpecName: "kube-api-access-jj26b") pod "2623ed5b-d192-4949-af6b-86b2250772da" (UID: "2623ed5b-d192-4949-af6b-86b2250772da"). InnerVolumeSpecName "kube-api-access-jj26b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.938163 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2623ed5b-d192-4949-af6b-86b2250772da" (UID: "2623ed5b-d192-4949-af6b-86b2250772da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.948959 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config" (OuterVolumeSpecName: "config") pod "2623ed5b-d192-4949-af6b-86b2250772da" (UID: "2623ed5b-d192-4949-af6b-86b2250772da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.957697 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2623ed5b-d192-4949-af6b-86b2250772da" (UID: "2623ed5b-d192-4949-af6b-86b2250772da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:51 crc kubenswrapper[4760]: I0123 18:23:51.973644 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2623ed5b-d192-4949-af6b-86b2250772da" (UID: "2623ed5b-d192-4949-af6b-86b2250772da"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.000617 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.000667 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.000679 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.000687 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2623ed5b-d192-4949-af6b-86b2250772da-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.000697 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/2623ed5b-d192-4949-af6b-86b2250772da-kube-api-access-jj26b\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.705256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerStarted","Data":"9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae"} Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.705313 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2hx46" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.777087 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.027448507 podStartE2EDuration="6.777064325s" podCreationTimestamp="2026-01-23 18:23:46 +0000 UTC" firstStartedPulling="2026-01-23 18:23:47.719904845 +0000 UTC m=+1370.722362778" lastFinishedPulling="2026-01-23 18:23:51.469520663 +0000 UTC m=+1374.471978596" observedRunningTime="2026-01-23 18:23:52.759094242 +0000 UTC m=+1375.761552175" watchObservedRunningTime="2026-01-23 18:23:52.777064325 +0000 UTC m=+1375.779522279" Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.786886 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"] Jan 23 18:23:52 crc kubenswrapper[4760]: I0123 18:23:52.795112 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2hx46"] Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.083518 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.221715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data\") pod \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.221839 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x79g\" (UniqueName: \"kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g\") pod \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.221894 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle\") pod \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.221955 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts\") pod \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\" (UID: \"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59\") " Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.231060 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts" (OuterVolumeSpecName: "scripts") pod "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" (UID: "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.231257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g" (OuterVolumeSpecName: "kube-api-access-7x79g") pod "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" (UID: "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59"). InnerVolumeSpecName "kube-api-access-7x79g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.251747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data" (OuterVolumeSpecName: "config-data") pod "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" (UID: "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.254081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" (UID: "34a4fd76-73c7-4bec-bffe-0a8dbb15cf59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.323805 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.324000 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x79g\" (UniqueName: \"kubernetes.io/projected/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-kube-api-access-7x79g\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.324059 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.324112 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.633116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2623ed5b-d192-4949-af6b-86b2250772da" path="/var/lib/kubelet/pods/2623ed5b-d192-4949-af6b-86b2250772da/volumes" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.713843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lhxxf" event={"ID":"34a4fd76-73c7-4bec-bffe-0a8dbb15cf59","Type":"ContainerDied","Data":"08fd7dd9904c959eddac829a55a0db6f7355c82fea8a61d3c78cf6f256da4034"} Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.713891 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08fd7dd9904c959eddac829a55a0db6f7355c82fea8a61d3c78cf6f256da4034" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.713867 4760 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lhxxf" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.714132 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.905899 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.906362 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="001bcc50-de00-4540-ac51-6e97dfe5606f" containerName="nova-scheduler-scheduler" containerID="cri-o://daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050" gracePeriod=30 Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.921462 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.921838 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-api" containerID="cri-o://47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" gracePeriod=30 Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.922304 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-log" containerID="cri-o://43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" gracePeriod=30 Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.956901 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.957326 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" 
containerID="cri-o://4163e0c4d5cf9828fc87b3239e0e9e0bff8edb612eca6869466e220b31fc3cf2" gracePeriod=30 Jan 23 18:23:53 crc kubenswrapper[4760]: I0123 18:23:53.957372 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" containerID="cri-o://88b837fd12ba1e81096628c654d99d4235f3a99d1ac638ff4859be86307cd623" gracePeriod=30 Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.537213 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.657759 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.657817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.657892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.658006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" 
(UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.658034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.658070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dkk4\" (UniqueName: \"kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4\") pod \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\" (UID: \"c7e833af-097b-4b0a-9bcd-3c6178f84bb0\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.658519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs" (OuterVolumeSpecName: "logs") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.659185 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.663766 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4" (OuterVolumeSpecName: "kube-api-access-4dkk4") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "kube-api-access-4dkk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.686105 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.688223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data" (OuterVolumeSpecName: "config-data") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.709864 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.711383 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7e833af-097b-4b0a-9bcd-3c6178f84bb0" (UID: "c7e833af-097b-4b0a-9bcd-3c6178f84bb0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.715074 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.726929 4760 generic.go:334] "Generic (PLEG): container finished" podID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerID="4163e0c4d5cf9828fc87b3239e0e9e0bff8edb612eca6869466e220b31fc3cf2" exitCode=143 Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.727010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerDied","Data":"4163e0c4d5cf9828fc87b3239e0e9e0bff8edb612eca6869466e220b31fc3cf2"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728566 4760 generic.go:334] "Generic (PLEG): container finished" podID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerID="47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" exitCode=0 Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728584 4760 generic.go:334] "Generic (PLEG): container finished" podID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerID="43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" exitCode=143 Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728605 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerDied","Data":"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728936 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerDied","Data":"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7e833af-097b-4b0a-9bcd-3c6178f84bb0","Type":"ContainerDied","Data":"aa72843779d2de9f2a4b2078f1577e474276fe8f512f71160c6dffab4143bf37"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.728970 4760 scope.go:117] "RemoveContainer" containerID="47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.729971 4760 generic.go:334] "Generic (PLEG): container finished" podID="001bcc50-de00-4540-ac51-6e97dfe5606f" containerID="daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050" exitCode=0 Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.730009 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.730086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"001bcc50-de00-4540-ac51-6e97dfe5606f","Type":"ContainerDied","Data":"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.730182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"001bcc50-de00-4540-ac51-6e97dfe5606f","Type":"ContainerDied","Data":"e1266214b2936bd24233a4ca55127f6d9abe8486db5e80bd1e8a2f1e51d342e2"} Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.761428 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.761455 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dkk4\" (UniqueName: \"kubernetes.io/projected/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-kube-api-access-4dkk4\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.761464 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.761473 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.761481 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e833af-097b-4b0a-9bcd-3c6178f84bb0-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.783200 4760 scope.go:117] "RemoveContainer" containerID="43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.786504 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.795944 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807182 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807632 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="init" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807644 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="init" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807656 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" containerName="nova-manage" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807662 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" containerName="nova-manage" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807671 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001bcc50-de00-4540-ac51-6e97dfe5606f" containerName="nova-scheduler-scheduler" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807677 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="001bcc50-de00-4540-ac51-6e97dfe5606f" containerName="nova-scheduler-scheduler" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807690 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" 
containerName="nova-api-log" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807697 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-log" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807710 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="dnsmasq-dns" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807715 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="dnsmasq-dns" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.807723 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-api" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807729 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-api" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807886 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-log" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807898 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2623ed5b-d192-4949-af6b-86b2250772da" containerName="dnsmasq-dns" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807911 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" containerName="nova-manage" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807952 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" containerName="nova-api-api" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.807965 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="001bcc50-de00-4540-ac51-6e97dfe5606f" 
containerName="nova-scheduler-scheduler" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.808880 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.818220 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.818251 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.818583 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.827954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.835381 4760 scope.go:117] "RemoveContainer" containerID="47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.835849 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be\": container with ID starting with 47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be not found: ID does not exist" containerID="47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.835879 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be"} err="failed to get container status \"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be\": rpc error: code = NotFound desc = could not find container \"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be\": container with ID starting with 
47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be not found: ID does not exist" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.835901 4760 scope.go:117] "RemoveContainer" containerID="43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.836118 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3\": container with ID starting with 43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3 not found: ID does not exist" containerID="43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836137 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3"} err="failed to get container status \"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3\": rpc error: code = NotFound desc = could not find container \"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3\": container with ID starting with 43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3 not found: ID does not exist" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836149 4760 scope.go:117] "RemoveContainer" containerID="47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836341 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be"} err="failed to get container status \"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be\": rpc error: code = NotFound desc = could not find container \"47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be\": container with ID 
starting with 47b42d43b79fbf3eb38817f3279a145575272a6d049645aff7d7fabc16e319be not found: ID does not exist" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836356 4760 scope.go:117] "RemoveContainer" containerID="43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836531 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3"} err="failed to get container status \"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3\": rpc error: code = NotFound desc = could not find container \"43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3\": container with ID starting with 43a0378307877a305a3f5f65caa66c47e1cb9fbff25c4a302d42987606b9f6f3 not found: ID does not exist" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.836547 4760 scope.go:117] "RemoveContainer" containerID="daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.859810 4760 scope.go:117] "RemoveContainer" containerID="daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050" Jan 23 18:23:54 crc kubenswrapper[4760]: E0123 18:23:54.860268 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050\": container with ID starting with daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050 not found: ID does not exist" containerID="daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.860295 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050"} err="failed to get container status 
\"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050\": rpc error: code = NotFound desc = could not find container \"daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050\": container with ID starting with daec873db7e6c365c3c07aaa7a902334b76d1e6bc8444c99872b3700845b7050 not found: ID does not exist" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.862075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84jj6\" (UniqueName: \"kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6\") pod \"001bcc50-de00-4540-ac51-6e97dfe5606f\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.862202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data\") pod \"001bcc50-de00-4540-ac51-6e97dfe5606f\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.862299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle\") pod \"001bcc50-de00-4540-ac51-6e97dfe5606f\" (UID: \"001bcc50-de00-4540-ac51-6e97dfe5606f\") " Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.864934 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6" (OuterVolumeSpecName: "kube-api-access-84jj6") pod "001bcc50-de00-4540-ac51-6e97dfe5606f" (UID: "001bcc50-de00-4540-ac51-6e97dfe5606f"). InnerVolumeSpecName "kube-api-access-84jj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.883259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data" (OuterVolumeSpecName: "config-data") pod "001bcc50-de00-4540-ac51-6e97dfe5606f" (UID: "001bcc50-de00-4540-ac51-6e97dfe5606f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.885218 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "001bcc50-de00-4540-ac51-6e97dfe5606f" (UID: "001bcc50-de00-4540-ac51-6e97dfe5606f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.965595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.965653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddef27ef-a5e0-4045-9024-6710d89f194a-logs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.965681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-config-data\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " 
pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.965751 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m8p\" (UniqueName: \"kubernetes.io/projected/ddef27ef-a5e0-4045-9024-6710d89f194a-kube-api-access-l7m8p\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.965928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.966086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-public-tls-certs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.966225 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.966240 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84jj6\" (UniqueName: \"kubernetes.io/projected/001bcc50-de00-4540-ac51-6e97dfe5606f-kube-api-access-84jj6\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:54 crc kubenswrapper[4760]: I0123 18:23:54.966250 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001bcc50-de00-4540-ac51-6e97dfe5606f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:55 crc 
kubenswrapper[4760]: I0123 18:23:55.067653 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.067930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddef27ef-a5e0-4045-9024-6710d89f194a-logs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.067947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-config-data\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.067971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7m8p\" (UniqueName: \"kubernetes.io/projected/ddef27ef-a5e0-4045-9024-6710d89f194a-kube-api-access-l7m8p\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.068015 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.068116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-public-tls-certs\") 
pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.068898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddef27ef-a5e0-4045-9024-6710d89f194a-logs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.072140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.072626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-public-tls-certs\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.074948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-config-data\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.079160 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddef27ef-a5e0-4045-9024-6710d89f194a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.093049 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7m8p\" (UniqueName: 
\"kubernetes.io/projected/ddef27ef-a5e0-4045-9024-6710d89f194a-kube-api-access-l7m8p\") pod \"nova-api-0\" (UID: \"ddef27ef-a5e0-4045-9024-6710d89f194a\") " pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.141464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.266494 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.282645 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.295627 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.298092 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.300150 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.303183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.380185 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc99d\" (UniqueName: \"kubernetes.io/projected/ed55273b-51cf-490f-80ec-003cd28fa749-kube-api-access-qc99d\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.380312 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.380340 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-config-data\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.482472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.482512 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-config-data\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.482579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc99d\" (UniqueName: \"kubernetes.io/projected/ed55273b-51cf-490f-80ec-003cd28fa749-kube-api-access-qc99d\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.488511 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-config-data\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.494956 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed55273b-51cf-490f-80ec-003cd28fa749-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.497316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc99d\" (UniqueName: \"kubernetes.io/projected/ed55273b-51cf-490f-80ec-003cd28fa749-kube-api-access-qc99d\") pod \"nova-scheduler-0\" (UID: \"ed55273b-51cf-490f-80ec-003cd28fa749\") " pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.606121 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001bcc50-de00-4540-ac51-6e97dfe5606f" path="/var/lib/kubelet/pods/001bcc50-de00-4540-ac51-6e97dfe5606f/volumes" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.607090 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e833af-097b-4b0a-9bcd-3c6178f84bb0" path="/var/lib/kubelet/pods/c7e833af-097b-4b0a-9bcd-3c6178f84bb0/volumes" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.621335 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.635940 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:23:55 crc kubenswrapper[4760]: W0123 18:23:55.639276 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddef27ef_a5e0_4045_9024_6710d89f194a.slice/crio-9ae91ad665a780c4c1741356792199563fbc3d97ffe283cc704c823b0d872193 WatchSource:0}: Error finding container 9ae91ad665a780c4c1741356792199563fbc3d97ffe283cc704c823b0d872193: Status 404 returned error can't find the container with id 9ae91ad665a780c4c1741356792199563fbc3d97ffe283cc704c823b0d872193 Jan 23 18:23:55 crc kubenswrapper[4760]: I0123 18:23:55.744015 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ddef27ef-a5e0-4045-9024-6710d89f194a","Type":"ContainerStarted","Data":"9ae91ad665a780c4c1741356792199563fbc3d97ffe283cc704c823b0d872193"} Jan 23 18:23:56 crc kubenswrapper[4760]: I0123 18:23:56.053956 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:23:56 crc kubenswrapper[4760]: W0123 18:23:56.074295 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded55273b_51cf_490f_80ec_003cd28fa749.slice/crio-06f93245249679403b6e8db6742bd7a58eb5b22f264c32050b9d75e34489650e WatchSource:0}: Error finding container 06f93245249679403b6e8db6742bd7a58eb5b22f264c32050b9d75e34489650e: Status 404 returned error can't find the container with id 06f93245249679403b6e8db6742bd7a58eb5b22f264c32050b9d75e34489650e Jan 23 18:23:56 crc kubenswrapper[4760]: I0123 18:23:56.757970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ddef27ef-a5e0-4045-9024-6710d89f194a","Type":"ContainerStarted","Data":"2c1823b21ad7de2a53c5408a321774f339a19e61f270ce265cba0f09f818c90b"} Jan 23 18:23:56 crc kubenswrapper[4760]: I0123 18:23:56.761765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed55273b-51cf-490f-80ec-003cd28fa749","Type":"ContainerStarted","Data":"06f93245249679403b6e8db6742bd7a58eb5b22f264c32050b9d75e34489650e"} Jan 23 18:23:57 crc kubenswrapper[4760]: I0123 18:23:57.110789 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:50698->10.217.0.178:8775: read: connection reset by peer" Jan 23 18:23:57 crc kubenswrapper[4760]: I0123 18:23:57.111365 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:50688->10.217.0.178:8775: read: connection reset by peer" Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.793821 4760 generic.go:334] "Generic (PLEG): container finished" podID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerID="88b837fd12ba1e81096628c654d99d4235f3a99d1ac638ff4859be86307cd623" exitCode=0 Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.793916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerDied","Data":"88b837fd12ba1e81096628c654d99d4235f3a99d1ac638ff4859be86307cd623"} Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.799363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ddef27ef-a5e0-4045-9024-6710d89f194a","Type":"ContainerStarted","Data":"16aafcf1311ca2adbb456a9447951916c99b4cf40643ae3a409febb089559a32"} Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.803429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed55273b-51cf-490f-80ec-003cd28fa749","Type":"ContainerStarted","Data":"2b4e5b8bc53c28b4d0bfe8ef5cff18c01a580b6b409a29664c9b14c730951a46"} Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.835199 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.835174287 podStartE2EDuration="5.835174287s" podCreationTimestamp="2026-01-23 18:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:59.828951283 +0000 UTC m=+1382.831409256" watchObservedRunningTime="2026-01-23 18:23:59.835174287 +0000 UTC m=+1382.837632260" Jan 23 18:23:59 crc kubenswrapper[4760]: I0123 18:23:59.859064 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.8590290849999995 podStartE2EDuration="4.859029085s" podCreationTimestamp="2026-01-23 18:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:59.851981988 +0000 UTC m=+1382.854439951" watchObservedRunningTime="2026-01-23 18:23:59.859029085 +0000 UTC m=+1382.861487058" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.240978 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394029 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs\") pod \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs\") pod \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394186 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd2zk\" (UniqueName: \"kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk\") pod \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394268 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle\") pod \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data\") pod \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\" (UID: \"5cddb8a6-4311-4f24-84b9-dac36bba5fe2\") " Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.394538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs" (OuterVolumeSpecName: "logs") pod "5cddb8a6-4311-4f24-84b9-dac36bba5fe2" (UID: "5cddb8a6-4311-4f24-84b9-dac36bba5fe2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.395099 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.401958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk" (OuterVolumeSpecName: "kube-api-access-hd2zk") pod "5cddb8a6-4311-4f24-84b9-dac36bba5fe2" (UID: "5cddb8a6-4311-4f24-84b9-dac36bba5fe2"). InnerVolumeSpecName "kube-api-access-hd2zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.420523 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data" (OuterVolumeSpecName: "config-data") pod "5cddb8a6-4311-4f24-84b9-dac36bba5fe2" (UID: "5cddb8a6-4311-4f24-84b9-dac36bba5fe2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.428348 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cddb8a6-4311-4f24-84b9-dac36bba5fe2" (UID: "5cddb8a6-4311-4f24-84b9-dac36bba5fe2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.458257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5cddb8a6-4311-4f24-84b9-dac36bba5fe2" (UID: "5cddb8a6-4311-4f24-84b9-dac36bba5fe2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.496824 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.496861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd2zk\" (UniqueName: \"kubernetes.io/projected/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-kube-api-access-hd2zk\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.496872 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.496883 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cddb8a6-4311-4f24-84b9-dac36bba5fe2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.621801 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.814727 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.814699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5cddb8a6-4311-4f24-84b9-dac36bba5fe2","Type":"ContainerDied","Data":"5efb87e8e3196c4e20db2b3824deb32a0a518dcf60b9c6a6c3ca31e5d61a9b12"} Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.816592 4760 scope.go:117] "RemoveContainer" containerID="88b837fd12ba1e81096628c654d99d4235f3a99d1ac638ff4859be86307cd623" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.861254 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.870121 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.873303 4760 scope.go:117] "RemoveContainer" containerID="4163e0c4d5cf9828fc87b3239e0e9e0bff8edb612eca6869466e220b31fc3cf2" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.885487 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:24:00 crc kubenswrapper[4760]: E0123 18:24:00.886091 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.886125 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" Jan 23 18:24:00 crc kubenswrapper[4760]: E0123 18:24:00.886172 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.886184 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" Jan 23 18:24:00 crc 
kubenswrapper[4760]: I0123 18:24:00.886549 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-log" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.886583 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" containerName="nova-metadata-metadata" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.888262 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.893500 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.895752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 18:24:00 crc kubenswrapper[4760]: I0123 18:24:00.918571 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.005851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.005963 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-config-data\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.006106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.006359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e938088-0a8b-43ef-8e83-e752649de48d-logs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.006640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4nf\" (UniqueName: \"kubernetes.io/projected/2e938088-0a8b-43ef-8e83-e752649de48d-kube-api-access-8n4nf\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.108585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4nf\" (UniqueName: \"kubernetes.io/projected/2e938088-0a8b-43ef-8e83-e752649de48d-kube-api-access-8n4nf\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.108634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.108697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-config-data\") pod 
\"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.108722 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.108837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e938088-0a8b-43ef-8e83-e752649de48d-logs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.109330 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e938088-0a8b-43ef-8e83-e752649de48d-logs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.114212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.114580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-config-data\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.130893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e938088-0a8b-43ef-8e83-e752649de48d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.137699 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4nf\" (UniqueName: \"kubernetes.io/projected/2e938088-0a8b-43ef-8e83-e752649de48d-kube-api-access-8n4nf\") pod \"nova-metadata-0\" (UID: \"2e938088-0a8b-43ef-8e83-e752649de48d\") " pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.220989 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.608273 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cddb8a6-4311-4f24-84b9-dac36bba5fe2" path="/var/lib/kubelet/pods/5cddb8a6-4311-4f24-84b9-dac36bba5fe2/volumes" Jan 23 18:24:01 crc kubenswrapper[4760]: W0123 18:24:01.703647 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e938088_0a8b_43ef_8e83_e752649de48d.slice/crio-a5fe77e8d0c9f972163f0179363051ef813ac557644fefabd254b20d7923af0c WatchSource:0}: Error finding container a5fe77e8d0c9f972163f0179363051ef813ac557644fefabd254b20d7923af0c: Status 404 returned error can't find the container with id a5fe77e8d0c9f972163f0179363051ef813ac557644fefabd254b20d7923af0c Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.707439 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:24:01 crc kubenswrapper[4760]: I0123 18:24:01.824521 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2e938088-0a8b-43ef-8e83-e752649de48d","Type":"ContainerStarted","Data":"a5fe77e8d0c9f972163f0179363051ef813ac557644fefabd254b20d7923af0c"} Jan 23 18:24:02 crc kubenswrapper[4760]: I0123 18:24:02.836375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e938088-0a8b-43ef-8e83-e752649de48d","Type":"ContainerStarted","Data":"9c85df31450e4a57c4b5a184ed086fc90ded3dc3746a336352cb6bc92c299f89"} Jan 23 18:24:02 crc kubenswrapper[4760]: I0123 18:24:02.836742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e938088-0a8b-43ef-8e83-e752649de48d","Type":"ContainerStarted","Data":"1f3b8114b9b68e328dff4fb47d2c93b50dfe2e8abd5b114e9776c599090c02cd"} Jan 23 18:24:02 crc kubenswrapper[4760]: I0123 18:24:02.856198 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.85617756 podStartE2EDuration="2.85617756s" podCreationTimestamp="2026-01-23 18:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:24:02.854752471 +0000 UTC m=+1385.857210404" watchObservedRunningTime="2026-01-23 18:24:02.85617756 +0000 UTC m=+1385.858635513" Jan 23 18:24:05 crc kubenswrapper[4760]: I0123 18:24:05.142556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:24:05 crc kubenswrapper[4760]: I0123 18:24:05.142904 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:24:05 crc kubenswrapper[4760]: I0123 18:24:05.622216 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:24:05 crc kubenswrapper[4760]: I0123 18:24:05.658485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:24:05 
crc kubenswrapper[4760]: I0123 18:24:05.900352 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:24:06 crc kubenswrapper[4760]: I0123 18:24:06.156663 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ddef27ef-a5e0-4045-9024-6710d89f194a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:24:06 crc kubenswrapper[4760]: I0123 18:24:06.156647 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ddef27ef-a5e0-4045-9024-6710d89f194a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:24:06 crc kubenswrapper[4760]: I0123 18:24:06.221248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:24:06 crc kubenswrapper[4760]: I0123 18:24:06.221382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:24:11 crc kubenswrapper[4760]: I0123 18:24:11.221882 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:24:11 crc kubenswrapper[4760]: I0123 18:24:11.222638 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:24:12 crc kubenswrapper[4760]: I0123 18:24:12.235683 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e938088-0a8b-43ef-8e83-e752649de48d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:24:12 crc kubenswrapper[4760]: I0123 18:24:12.235699 4760 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e938088-0a8b-43ef-8e83-e752649de48d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.154298 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.155547 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.156575 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.168188 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.959906 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:24:15 crc kubenswrapper[4760]: I0123 18:24:15.968087 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:24:16 crc kubenswrapper[4760]: I0123 18:24:16.075786 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:24:16 crc kubenswrapper[4760]: I0123 18:24:16.075857 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 23 18:24:17 crc kubenswrapper[4760]: I0123 18:24:17.207514 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:24:21 crc kubenswrapper[4760]: I0123 18:24:21.231082 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:24:21 crc kubenswrapper[4760]: I0123 18:24:21.233070 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:24:21 crc kubenswrapper[4760]: I0123 18:24:21.244881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:24:22 crc kubenswrapper[4760]: I0123 18:24:22.035830 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:24:31 crc kubenswrapper[4760]: I0123 18:24:31.365294 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:32 crc kubenswrapper[4760]: I0123 18:24:32.316855 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:35 crc kubenswrapper[4760]: I0123 18:24:35.343194 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="rabbitmq" containerID="cri-o://56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3" gracePeriod=604797 Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.379243 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="rabbitmq" containerID="cri-o://2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982" gracePeriod=604796 Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.864749 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.867574 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.885776 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.979863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.980091 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xllf\" (UniqueName: \"kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:36 crc kubenswrapper[4760]: I0123 18:24:36.980261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.082303 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " 
pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.082434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.082479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xllf\" (UniqueName: \"kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.083304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.083457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.103642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xllf\" (UniqueName: \"kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf\") pod \"redhat-operators-7bmkv\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 
crc kubenswrapper[4760]: I0123 18:24:37.188221 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:37 crc kubenswrapper[4760]: I0123 18:24:37.663832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:38 crc kubenswrapper[4760]: I0123 18:24:38.195957 4760 generic.go:334] "Generic (PLEG): container finished" podID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerID="722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e" exitCode=0 Jan 23 18:24:38 crc kubenswrapper[4760]: I0123 18:24:38.196185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerDied","Data":"722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e"} Jan 23 18:24:38 crc kubenswrapper[4760]: I0123 18:24:38.196323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerStarted","Data":"14cac6ef3911c34f129fbd08c2e466b0157b65d7a2a950cf68e7acd1610647fc"} Jan 23 18:24:40 crc kubenswrapper[4760]: I0123 18:24:40.212517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerStarted","Data":"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd"} Jan 23 18:24:41 crc kubenswrapper[4760]: I0123 18:24:41.226230 4760 generic.go:334] "Generic (PLEG): container finished" podID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerID="9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd" exitCode=0 Jan 23 18:24:41 crc kubenswrapper[4760]: I0123 18:24:41.226342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" 
event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerDied","Data":"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd"} Jan 23 18:24:41 crc kubenswrapper[4760]: I0123 18:24:41.569259 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 23 18:24:41 crc kubenswrapper[4760]: I0123 18:24:41.636298 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 23 18:24:44 crc kubenswrapper[4760]: I0123 18:24:44.254796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerStarted","Data":"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324"} Jan 23 18:24:44 crc kubenswrapper[4760]: I0123 18:24:44.271530 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7bmkv" podStartSLOduration=3.084829125 podStartE2EDuration="8.271510336s" podCreationTimestamp="2026-01-23 18:24:36 +0000 UTC" firstStartedPulling="2026-01-23 18:24:38.197629895 +0000 UTC m=+1421.200087828" lastFinishedPulling="2026-01-23 18:24:43.384311066 +0000 UTC m=+1426.386769039" observedRunningTime="2026-01-23 18:24:44.270396565 +0000 UTC m=+1427.272854518" watchObservedRunningTime="2026-01-23 18:24:44.271510336 +0000 UTC m=+1427.273968259" Jan 23 18:24:44 crc kubenswrapper[4760]: I0123 18:24:44.928683 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:44 crc kubenswrapper[4760]: I0123 18:24:44.936318 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032546 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032588 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqk92\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: 
\"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032669 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.032951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fj9w\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033743 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033783 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033823 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.033997 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.034024 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf\") pod \"916c4314-f639-42ce-9c84-48c7b1c4df05\" (UID: \"916c4314-f639-42ce-9c84-48c7b1c4df05\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 
18:24:45.034060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.034098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins\") pod \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\" (UID: \"00d4b29c-f0c7-4d78-9db9-72e58e26360a\") " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.034585 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.035217 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.038353 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.038758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.044208 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.044335 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.044729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.046586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.050129 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w" (OuterVolumeSpecName: "kube-api-access-9fj9w") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "kube-api-access-9fj9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.050615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92" (OuterVolumeSpecName: "kube-api-access-bqk92") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "kube-api-access-bqk92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.052994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info" (OuterVolumeSpecName: "pod-info") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.087872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.087890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info" (OuterVolumeSpecName: "pod-info") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.087973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.088533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.090553 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138017 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138543 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqk92\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-kube-api-access-bqk92\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138661 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138749 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138819 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.138912 4760 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9fj9w\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-kube-api-access-9fj9w\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140372 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00d4b29c-f0c7-4d78-9db9-72e58e26360a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140487 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140572 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140669 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/916c4314-f639-42ce-9c84-48c7b1c4df05-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140746 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/916c4314-f639-42ce-9c84-48c7b1c4df05-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140823 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00d4b29c-f0c7-4d78-9db9-72e58e26360a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140897 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-plugins-conf\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.140966 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.141068 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.173747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data" (OuterVolumeSpecName: "config-data") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.221667 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data" (OuterVolumeSpecName: "config-data") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.247974 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.248004 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.283060 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.286559 4760 generic.go:334] "Generic (PLEG): container finished" podID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerID="2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982" exitCode=0 Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.286633 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerDied","Data":"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982"} Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.286661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"916c4314-f639-42ce-9c84-48c7b1c4df05","Type":"ContainerDied","Data":"233d8bc075dd3d137f05a5a93663dcaf4aae6bf5c3b7048f2164cb487fe3f6d5"} Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.286676 4760 scope.go:117] "RemoveContainer" containerID="2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.286822 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.294307 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.322058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf" (OuterVolumeSpecName: "server-conf") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.322054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf" (OuterVolumeSpecName: "server-conf") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.327361 4760 generic.go:334] "Generic (PLEG): container finished" podID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerID="56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3" exitCode=0 Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.327871 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.328362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerDied","Data":"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3"} Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.328426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00d4b29c-f0c7-4d78-9db9-72e58e26360a","Type":"ContainerDied","Data":"07ab51da57f13d31f7fcf7a11f409fcc6717f121ffe9d7bab4997d6baf0a1463"} Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.350089 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/916c4314-f639-42ce-9c84-48c7b1c4df05-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.350123 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.350133 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00d4b29c-f0c7-4d78-9db9-72e58e26360a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.350141 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.384724 4760 scope.go:117] "RemoveContainer" containerID="a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.418019 4760 scope.go:117] "RemoveContainer" 
containerID="2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.418456 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982\": container with ID starting with 2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982 not found: ID does not exist" containerID="2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.418489 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982"} err="failed to get container status \"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982\": rpc error: code = NotFound desc = could not find container \"2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982\": container with ID starting with 2e6cd43b5c80fe615fbfad80734ab9f8fe293400436ecfb0ef7a8fb31b027982 not found: ID does not exist" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.418510 4760 scope.go:117] "RemoveContainer" containerID="a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.419577 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947\": container with ID starting with a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947 not found: ID does not exist" containerID="a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.419604 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947"} err="failed to get container status \"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947\": rpc error: code = NotFound desc = could not find container \"a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947\": container with ID starting with a0e1b2c66b77d478ed375309912608fe0241065012cde13af207b639ffe91947 not found: ID does not exist" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.419617 4760 scope.go:117] "RemoveContainer" containerID="56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.438534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "00d4b29c-f0c7-4d78-9db9-72e58e26360a" (UID: "00d4b29c-f0c7-4d78-9db9-72e58e26360a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.441537 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "916c4314-f639-42ce-9c84-48c7b1c4df05" (UID: "916c4314-f639-42ce-9c84-48c7b1c4df05"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.441594 4760 scope.go:117] "RemoveContainer" containerID="f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.451957 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/916c4314-f639-42ce-9c84-48c7b1c4df05-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.451994 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00d4b29c-f0c7-4d78-9db9-72e58e26360a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.463617 4760 scope.go:117] "RemoveContainer" containerID="56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.464089 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3\": container with ID starting with 56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3 not found: ID does not exist" containerID="56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.464133 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3"} err="failed to get container status \"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3\": rpc error: code = NotFound desc = could not find container \"56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3\": container with ID starting with 56d364585f714cf61153bea20b590cafafddbc7001c06e222d9562433dd95db3 not found: ID does 
not exist" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.464164 4760 scope.go:117] "RemoveContainer" containerID="f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.464606 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90\": container with ID starting with f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90 not found: ID does not exist" containerID="f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.464642 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90"} err="failed to get container status \"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90\": rpc error: code = NotFound desc = could not find container \"f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90\": container with ID starting with f637301f896b6ff2e82e421559e0ff19bc7812800bb780f9b3ed85ce8d0abe90 not found: ID does not exist" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.620042 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.627609 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647033 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.647377 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647387 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.647419 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="setup-container" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647426 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="setup-container" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.647439 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647445 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: E0123 18:24:45.647454 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="setup-container" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647460 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="setup-container" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647624 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.647638 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" containerName="rabbitmq" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.648485 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.658518 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.659309 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.659521 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.659734 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l4vzp" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.660153 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.660304 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.660463 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.670453 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.706507 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.714506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.739509 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.741163 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.743075 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.744225 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.744770 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.745660 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.745718 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.745890 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kzn5r" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.747488 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.749676 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsgg\" (UniqueName: 
\"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-kube-api-access-zpsgg\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764888 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.764989 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765044 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5tg\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-kube-api-access-2k5tg\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765184 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/108fc09d-d5b2-41bc-b2dd-f2edb1847366-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765288 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c1aa6a7-0392-4091-b65f-69e5e224288c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c1aa6a7-0392-4091-b65f-69e5e224288c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765609 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/108fc09d-d5b2-41bc-b2dd-f2edb1847366-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765746 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.765827 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867109 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/108fc09d-d5b2-41bc-b2dd-f2edb1847366-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c1aa6a7-0392-4091-b65f-69e5e224288c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c1aa6a7-0392-4091-b65f-69e5e224288c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.867313 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.868338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.868385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.868438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/108fc09d-d5b2-41bc-b2dd-f2edb1847366-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.868462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: 
I0123 18:24:45.868860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.868888 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.869707 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.869752 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.869838 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.869905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.870072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.870301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.870857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/108fc09d-d5b2-41bc-b2dd-f2edb1847366-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.870958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsgg\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-kube-api-access-zpsgg\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871608 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.871653 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5tg\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-kube-api-access-2k5tg\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.872425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.872545 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.872944 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.874278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.874543 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.874634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.875094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/108fc09d-d5b2-41bc-b2dd-f2edb1847366-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.875970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c1aa6a7-0392-4091-b65f-69e5e224288c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.876309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c1aa6a7-0392-4091-b65f-69e5e224288c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.876487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.876810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.877401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c1aa6a7-0392-4091-b65f-69e5e224288c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.888038 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.888665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsgg\" (UniqueName: \"kubernetes.io/projected/1c1aa6a7-0392-4091-b65f-69e5e224288c-kube-api-access-zpsgg\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.892306 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2k5tg\" (UniqueName: \"kubernetes.io/projected/108fc09d-d5b2-41bc-b2dd-f2edb1847366-kube-api-access-2k5tg\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.893827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/108fc09d-d5b2-41bc-b2dd-f2edb1847366-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.904309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"108fc09d-d5b2-41bc-b2dd-f2edb1847366\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.916552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1c1aa6a7-0392-4091-b65f-69e5e224288c\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:45 crc kubenswrapper[4760]: I0123 18:24:45.979560 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.060062 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.079577 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.079633 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.079686 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.080477 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.080548 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491" gracePeriod=600 Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.339739 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491" exitCode=0 Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.339926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491"} Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.340108 4760 scope.go:117] "RemoveContainer" containerID="6f55d0f48ab5f20742a6157a2f638d64038b9a8ba0a7914e72dac7dd13e1a1c1" Jan 23 18:24:46 crc kubenswrapper[4760]: W0123 18:24:46.441130 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod108fc09d_d5b2_41bc_b2dd_f2edb1847366.slice/crio-54edca1fad556dd541373bd963840135de68c6a051de30fb08a1588d7af675dc WatchSource:0}: Error finding container 54edca1fad556dd541373bd963840135de68c6a051de30fb08a1588d7af675dc: Status 404 returned error can't find the container with id 54edca1fad556dd541373bd963840135de68c6a051de30fb08a1588d7af675dc Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.443876 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:46 crc kubenswrapper[4760]: I0123 18:24:46.548379 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.189322 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.189686 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.366055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"1c1aa6a7-0392-4091-b65f-69e5e224288c","Type":"ContainerStarted","Data":"ab1e3814708c165424d1c99f410b057c7bc5736feb6bc94f0ac325dc386f0493"} Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.368359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562"} Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.370508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"108fc09d-d5b2-41bc-b2dd-f2edb1847366","Type":"ContainerStarted","Data":"54edca1fad556dd541373bd963840135de68c6a051de30fb08a1588d7af675dc"} Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.610565 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d4b29c-f0c7-4d78-9db9-72e58e26360a" path="/var/lib/kubelet/pods/00d4b29c-f0c7-4d78-9db9-72e58e26360a/volumes" Jan 23 18:24:47 crc kubenswrapper[4760]: I0123 18:24:47.611578 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916c4314-f639-42ce-9c84-48c7b1c4df05" path="/var/lib/kubelet/pods/916c4314-f639-42ce-9c84-48c7b1c4df05/volumes" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.116768 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-w76b9"] Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.118690 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.122452 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.136155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-w76b9"] Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206651 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206734 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc46q\" (UniqueName: \"kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.206806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.246979 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bmkv" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="registry-server" probeResult="failure" output=< Jan 23 18:24:48 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:24:48 crc kubenswrapper[4760]: > Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.256804 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-w76b9"] Jan 23 18:24:48 crc kubenswrapper[4760]: E0123 18:24:48.257461 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-hc46q openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" podUID="71980c08-d59f-48af-97e3-e7108bf0f2bb" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.282391 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 
18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.283970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.305086 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.310911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311005 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc46q\" (UniqueName: \"kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311152 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqdh\" (UniqueName: \"kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311331 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311423 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.311918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.312224 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.312243 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.312283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.312331 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.343989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc46q\" (UniqueName: \"kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q\") pod \"dnsmasq-dns-6447ccbd8f-w76b9\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.379560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"108fc09d-d5b2-41bc-b2dd-f2edb1847366","Type":"ContainerStarted","Data":"73ac9a73291f81c537679348c36071775915e8bcecba27aeebf9edfcbe776dd7"} Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.381306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"1c1aa6a7-0392-4091-b65f-69e5e224288c","Type":"ContainerStarted","Data":"300d69cdcc8b621b0d5401a28011099b96e25472ab038f3516bd523783ba24ae"} Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.381330 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.390705 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412683 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412822 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412867 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqdh\" (UniqueName: \"kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: 
\"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.412985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.413821 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.413849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.414014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc 
kubenswrapper[4760]: I0123 18:24:48.414175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.414249 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.437372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqdh\" (UniqueName: \"kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh\") pod \"dnsmasq-dns-864d5fc68c-8t5dz\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514773 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514907 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.514975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc46q\" (UniqueName: \"kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q\") pod \"71980c08-d59f-48af-97e3-e7108bf0f2bb\" (UID: \"71980c08-d59f-48af-97e3-e7108bf0f2bb\") " Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.515506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.515522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.515569 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.516015 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config" (OuterVolumeSpecName: "config") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.516220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.531252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q" (OuterVolumeSpecName: "kube-api-access-hc46q") pod "71980c08-d59f-48af-97e3-e7108bf0f2bb" (UID: "71980c08-d59f-48af-97e3-e7108bf0f2bb"). InnerVolumeSpecName "kube-api-access-hc46q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.602650 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617622 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617655 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617667 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617676 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617685 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71980c08-d59f-48af-97e3-e7108bf0f2bb-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Jan 23 18:24:48 crc kubenswrapper[4760]: I0123 18:24:48.617693 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc46q\" (UniqueName: \"kubernetes.io/projected/71980c08-d59f-48af-97e3-e7108bf0f2bb-kube-api-access-hc46q\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:49 crc kubenswrapper[4760]: W0123 18:24:49.052156 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1899a1_9b41_4804_8aa1_6fc4d97a94aa.slice/crio-883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22 WatchSource:0}: Error finding container 883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22: Status 404 returned error can't find the container with id 883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22 Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.053187 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.395013 4760 generic.go:334] "Generic (PLEG): container finished" podID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerID="b6e27062c709616b88da49d5a34d97fcadbfbfbc0a9cb8324921aaa0246eb53e" exitCode=0 Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.395123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" event={"ID":"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa","Type":"ContainerDied","Data":"b6e27062c709616b88da49d5a34d97fcadbfbfbc0a9cb8324921aaa0246eb53e"} Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.395474 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-w76b9" Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.395541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" event={"ID":"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa","Type":"ContainerStarted","Data":"883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22"} Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.613479 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-w76b9"] Jan 23 18:24:49 crc kubenswrapper[4760]: I0123 18:24:49.620624 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-w76b9"] Jan 23 18:24:50 crc kubenswrapper[4760]: I0123 18:24:50.408248 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" event={"ID":"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa","Type":"ContainerStarted","Data":"89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853"} Jan 23 18:24:50 crc kubenswrapper[4760]: I0123 18:24:50.409163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:50 crc kubenswrapper[4760]: I0123 18:24:50.438065 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" podStartSLOduration=2.438044645 podStartE2EDuration="2.438044645s" podCreationTimestamp="2026-01-23 18:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:24:50.431693917 +0000 UTC m=+1433.434151870" watchObservedRunningTime="2026-01-23 18:24:50.438044645 +0000 UTC m=+1433.440502608" Jan 23 18:24:51 crc kubenswrapper[4760]: I0123 18:24:51.604945 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71980c08-d59f-48af-97e3-e7108bf0f2bb" 
path="/var/lib/kubelet/pods/71980c08-d59f-48af-97e3-e7108bf0f2bb/volumes" Jan 23 18:24:57 crc kubenswrapper[4760]: I0123 18:24:57.248177 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:57 crc kubenswrapper[4760]: I0123 18:24:57.305115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:57 crc kubenswrapper[4760]: I0123 18:24:57.491628 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:58 crc kubenswrapper[4760]: I0123 18:24:58.478053 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7bmkv" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="registry-server" containerID="cri-o://8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324" gracePeriod=2 Jan 23 18:24:58 crc kubenswrapper[4760]: I0123 18:24:58.604529 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:24:58 crc kubenswrapper[4760]: I0123 18:24:58.703392 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:24:58 crc kubenswrapper[4760]: I0123 18:24:58.712600 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="dnsmasq-dns" containerID="cri-o://c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab" gracePeriod=10 Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.055579 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.151542 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.211870 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities\") pod \"765a5365-6c1a-494c-b4a6-28998cd10df5\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.212014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content\") pod \"765a5365-6c1a-494c-b4a6-28998cd10df5\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.212127 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xllf\" (UniqueName: \"kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf\") pod \"765a5365-6c1a-494c-b4a6-28998cd10df5\" (UID: \"765a5365-6c1a-494c-b4a6-28998cd10df5\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.212483 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities" (OuterVolumeSpecName: "utilities") pod "765a5365-6c1a-494c-b4a6-28998cd10df5" (UID: "765a5365-6c1a-494c-b4a6-28998cd10df5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.212711 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.216901 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf" (OuterVolumeSpecName: "kube-api-access-8xllf") pod "765a5365-6c1a-494c-b4a6-28998cd10df5" (UID: "765a5365-6c1a-494c-b4a6-28998cd10df5"). InnerVolumeSpecName "kube-api-access-8xllf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.313328 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config\") pod \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.313527 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb\") pod \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.313603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc\") pod \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.313636 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv442\" (UniqueName: 
\"kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442\") pod \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.313679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb\") pod \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\" (UID: \"adfa5ed3-d512-4e7a-b912-54a3a7882a0e\") " Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.314072 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xllf\" (UniqueName: \"kubernetes.io/projected/765a5365-6c1a-494c-b4a6-28998cd10df5-kube-api-access-8xllf\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.318473 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442" (OuterVolumeSpecName: "kube-api-access-sv442") pod "adfa5ed3-d512-4e7a-b912-54a3a7882a0e" (UID: "adfa5ed3-d512-4e7a-b912-54a3a7882a0e"). InnerVolumeSpecName "kube-api-access-sv442". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.322723 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "765a5365-6c1a-494c-b4a6-28998cd10df5" (UID: "765a5365-6c1a-494c-b4a6-28998cd10df5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.360404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adfa5ed3-d512-4e7a-b912-54a3a7882a0e" (UID: "adfa5ed3-d512-4e7a-b912-54a3a7882a0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.360205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adfa5ed3-d512-4e7a-b912-54a3a7882a0e" (UID: "adfa5ed3-d512-4e7a-b912-54a3a7882a0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.366844 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adfa5ed3-d512-4e7a-b912-54a3a7882a0e" (UID: "adfa5ed3-d512-4e7a-b912-54a3a7882a0e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.369479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config" (OuterVolumeSpecName: "config") pod "adfa5ed3-d512-4e7a-b912-54a3a7882a0e" (UID: "adfa5ed3-d512-4e7a-b912-54a3a7882a0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415217 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415264 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765a5365-6c1a-494c-b4a6-28998cd10df5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415282 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415294 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415306 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv442\" (UniqueName: \"kubernetes.io/projected/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-kube-api-access-sv442\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.415317 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa5ed3-d512-4e7a-b912-54a3a7882a0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.486917 4760 generic.go:334] "Generic (PLEG): container finished" podID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerID="c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab" exitCode=0 Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.486989 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.486998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" event={"ID":"adfa5ed3-d512-4e7a-b912-54a3a7882a0e","Type":"ContainerDied","Data":"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab"} Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.487113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ls9tt" event={"ID":"adfa5ed3-d512-4e7a-b912-54a3a7882a0e","Type":"ContainerDied","Data":"e371438e2a01402d5cff8ee4ea45a25301a7fa02124e9416e4da8d3652b70469"} Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.487131 4760 scope.go:117] "RemoveContainer" containerID="c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.489650 4760 generic.go:334] "Generic (PLEG): container finished" podID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerID="8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324" exitCode=0 Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.489682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerDied","Data":"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324"} Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.489703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmkv" event={"ID":"765a5365-6c1a-494c-b4a6-28998cd10df5","Type":"ContainerDied","Data":"14cac6ef3911c34f129fbd08c2e466b0157b65d7a2a950cf68e7acd1610647fc"} Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.489774 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmkv" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.506300 4760 scope.go:117] "RemoveContainer" containerID="b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.527517 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.539505 4760 scope.go:117] "RemoveContainer" containerID="c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab" Jan 23 18:24:59 crc kubenswrapper[4760]: E0123 18:24:59.539905 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab\": container with ID starting with c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab not found: ID does not exist" containerID="c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.539936 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab"} err="failed to get container status \"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab\": rpc error: code = NotFound desc = could not find container \"c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab\": container with ID starting with c1a98036bf6dc654c93c1dceb7c61a46d6ec7cfa61ad14f4d8e3bd7c66a121ab not found: ID does not exist" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.539961 4760 scope.go:117] "RemoveContainer" containerID="b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39" Jan 23 18:24:59 crc kubenswrapper[4760]: E0123 18:24:59.540242 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39\": container with ID starting with b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39 not found: ID does not exist" containerID="b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.540286 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39"} err="failed to get container status \"b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39\": rpc error: code = NotFound desc = could not find container \"b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39\": container with ID starting with b6b895ae82fde98b99937f5f9f316428a5a8df7b2f44ef84f687984eeab96f39 not found: ID does not exist" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.540317 4760 scope.go:117] "RemoveContainer" containerID="8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.542739 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7bmkv"] Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.552018 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.559568 4760 scope.go:117] "RemoveContainer" containerID="9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.561188 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ls9tt"] Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.582539 4760 scope.go:117] "RemoveContainer" containerID="722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.610861 
4760 scope.go:117] "RemoveContainer" containerID="8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324" Jan 23 18:24:59 crc kubenswrapper[4760]: E0123 18:24:59.611261 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324\": container with ID starting with 8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324 not found: ID does not exist" containerID="8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.611311 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324"} err="failed to get container status \"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324\": rpc error: code = NotFound desc = could not find container \"8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324\": container with ID starting with 8e30517233925977a1a028dba60faae0c60af5718493e90a6579b62d08696324 not found: ID does not exist" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.611343 4760 scope.go:117] "RemoveContainer" containerID="9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd" Jan 23 18:24:59 crc kubenswrapper[4760]: E0123 18:24:59.611735 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd\": container with ID starting with 9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd not found: ID does not exist" containerID="9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.611780 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd"} err="failed to get container status \"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd\": rpc error: code = NotFound desc = could not find container \"9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd\": container with ID starting with 9807be02fa888519732cef436d6e8e5d376b1cc3d281b45cf2c40912228b31bd not found: ID does not exist" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.611810 4760 scope.go:117] "RemoveContainer" containerID="722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.611876 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" path="/var/lib/kubelet/pods/765a5365-6c1a-494c-b4a6-28998cd10df5/volumes" Jan 23 18:24:59 crc kubenswrapper[4760]: E0123 18:24:59.612260 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e\": container with ID starting with 722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e not found: ID does not exist" containerID="722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 18:24:59.612295 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e"} err="failed to get container status \"722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e\": rpc error: code = NotFound desc = could not find container \"722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e\": container with ID starting with 722ec3f703b907a6a20eedfaa0589fb687f19a7c2a3d1420fce3684807965b0e not found: ID does not exist" Jan 23 18:24:59 crc kubenswrapper[4760]: I0123 
18:24:59.613338 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" path="/var/lib/kubelet/pods/adfa5ed3-d512-4e7a-b912-54a3a7882a0e/volumes" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.419062 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz"] Jan 23 18:25:04 crc kubenswrapper[4760]: E0123 18:25:04.420728 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="extract-content" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.420773 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="extract-content" Jan 23 18:25:04 crc kubenswrapper[4760]: E0123 18:25:04.420794 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="init" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.420802 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="init" Jan 23 18:25:04 crc kubenswrapper[4760]: E0123 18:25:04.420854 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="dnsmasq-dns" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.420864 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="dnsmasq-dns" Jan 23 18:25:04 crc kubenswrapper[4760]: E0123 18:25:04.420884 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="registry-server" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.420892 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="registry-server" Jan 23 18:25:04 crc kubenswrapper[4760]: E0123 
18:25:04.420909 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="extract-utilities" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.420917 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="extract-utilities" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.421278 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfa5ed3-d512-4e7a-b912-54a3a7882a0e" containerName="dnsmasq-dns" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.421292 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="765a5365-6c1a-494c-b4a6-28998cd10df5" containerName="registry-server" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.422169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.425392 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.425827 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.426443 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.427024 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.435543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz"] Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.511586 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.512101 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.512470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.512569 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2gq\" (UniqueName: \"kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.614943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.615012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.615056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.615077 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2gq\" (UniqueName: \"kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.622631 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.622854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.623495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.631535 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2gq\" (UniqueName: \"kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:04 crc kubenswrapper[4760]: I0123 18:25:04.745242 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:05 crc kubenswrapper[4760]: I0123 18:25:05.248179 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz"] Jan 23 18:25:05 crc kubenswrapper[4760]: W0123 18:25:05.252293 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae790af6_6150_46ef_9f6f_7d8926590585.slice/crio-29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c WatchSource:0}: Error finding container 29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c: Status 404 returned error can't find the container with id 29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c Jan 23 18:25:05 crc kubenswrapper[4760]: I0123 18:25:05.552934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" event={"ID":"ae790af6-6150-46ef-9f6f-7d8926590585","Type":"ContainerStarted","Data":"29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c"} Jan 23 18:25:13 crc kubenswrapper[4760]: I0123 18:25:13.621420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" event={"ID":"ae790af6-6150-46ef-9f6f-7d8926590585","Type":"ContainerStarted","Data":"8bf3d5e4bd952c450d25430e9cbe3b3c75513090227325b4caccb1a2b8ae1b0a"} Jan 23 18:25:20 crc kubenswrapper[4760]: I0123 18:25:20.686796 4760 generic.go:334] "Generic (PLEG): container finished" podID="108fc09d-d5b2-41bc-b2dd-f2edb1847366" containerID="73ac9a73291f81c537679348c36071775915e8bcecba27aeebf9edfcbe776dd7" exitCode=0 Jan 23 18:25:20 crc kubenswrapper[4760]: I0123 18:25:20.686934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"108fc09d-d5b2-41bc-b2dd-f2edb1847366","Type":"ContainerDied","Data":"73ac9a73291f81c537679348c36071775915e8bcecba27aeebf9edfcbe776dd7"} Jan 23 18:25:20 crc kubenswrapper[4760]: I0123 18:25:20.689588 4760 generic.go:334] "Generic (PLEG): container finished" podID="1c1aa6a7-0392-4091-b65f-69e5e224288c" containerID="300d69cdcc8b621b0d5401a28011099b96e25472ab038f3516bd523783ba24ae" exitCode=0 Jan 23 18:25:20 crc kubenswrapper[4760]: I0123 18:25:20.689650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c1aa6a7-0392-4091-b65f-69e5e224288c","Type":"ContainerDied","Data":"300d69cdcc8b621b0d5401a28011099b96e25472ab038f3516bd523783ba24ae"} Jan 23 18:25:20 crc kubenswrapper[4760]: I0123 18:25:20.729675 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" podStartSLOduration=9.27197042 podStartE2EDuration="16.729639911s" podCreationTimestamp="2026-01-23 18:25:04 +0000 UTC" firstStartedPulling="2026-01-23 18:25:05.253982292 +0000 UTC m=+1448.256440225" lastFinishedPulling="2026-01-23 18:25:12.711651783 +0000 UTC m=+1455.714109716" observedRunningTime="2026-01-23 18:25:13.64729705 +0000 UTC m=+1456.649754983" watchObservedRunningTime="2026-01-23 18:25:20.729639911 +0000 UTC m=+1463.732097844" Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.700288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"108fc09d-d5b2-41bc-b2dd-f2edb1847366","Type":"ContainerStarted","Data":"d50e097cd0c696140a63c91ce8d9884d61b7b0a7d7bd9e8f771e054d9d647998"} Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.701214 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.702307 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"1c1aa6a7-0392-4091-b65f-69e5e224288c","Type":"ContainerStarted","Data":"2a9f52ca63708f486fd49c722caa32aeb0bfff456ad50a921fc09336e535c0f0"} Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.702525 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.731897 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.731872806 podStartE2EDuration="36.731872806s" podCreationTimestamp="2026-01-23 18:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:25:21.725324373 +0000 UTC m=+1464.727782326" watchObservedRunningTime="2026-01-23 18:25:21.731872806 +0000 UTC m=+1464.734330739" Jan 23 18:25:21 crc kubenswrapper[4760]: I0123 18:25:21.764638 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.764617224 podStartE2EDuration="36.764617224s" podCreationTimestamp="2026-01-23 18:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:25:21.753222785 +0000 UTC m=+1464.755680758" watchObservedRunningTime="2026-01-23 18:25:21.764617224 +0000 UTC m=+1464.767075177" Jan 23 18:25:27 crc kubenswrapper[4760]: I0123 18:25:27.780113 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae790af6-6150-46ef-9f6f-7d8926590585" containerID="8bf3d5e4bd952c450d25430e9cbe3b3c75513090227325b4caccb1a2b8ae1b0a" exitCode=0 Jan 23 18:25:27 crc kubenswrapper[4760]: I0123 18:25:27.780186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" 
event={"ID":"ae790af6-6150-46ef-9f6f-7d8926590585","Type":"ContainerDied","Data":"8bf3d5e4bd952c450d25430e9cbe3b3c75513090227325b4caccb1a2b8ae1b0a"} Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.175953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.253650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam\") pod \"ae790af6-6150-46ef-9f6f-7d8926590585\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.253702 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory\") pod \"ae790af6-6150-46ef-9f6f-7d8926590585\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.253781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle\") pod \"ae790af6-6150-46ef-9f6f-7d8926590585\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.253936 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2gq\" (UniqueName: \"kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq\") pod \"ae790af6-6150-46ef-9f6f-7d8926590585\" (UID: \"ae790af6-6150-46ef-9f6f-7d8926590585\") " Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.262273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ae790af6-6150-46ef-9f6f-7d8926590585" (UID: "ae790af6-6150-46ef-9f6f-7d8926590585"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.262309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq" (OuterVolumeSpecName: "kube-api-access-bw2gq") pod "ae790af6-6150-46ef-9f6f-7d8926590585" (UID: "ae790af6-6150-46ef-9f6f-7d8926590585"). InnerVolumeSpecName "kube-api-access-bw2gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.282119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae790af6-6150-46ef-9f6f-7d8926590585" (UID: "ae790af6-6150-46ef-9f6f-7d8926590585"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.284542 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory" (OuterVolumeSpecName: "inventory") pod "ae790af6-6150-46ef-9f6f-7d8926590585" (UID: "ae790af6-6150-46ef-9f6f-7d8926590585"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.356489 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.356797 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.356808 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae790af6-6150-46ef-9f6f-7d8926590585-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.356819 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw2gq\" (UniqueName: \"kubernetes.io/projected/ae790af6-6150-46ef-9f6f-7d8926590585-kube-api-access-bw2gq\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.803446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" event={"ID":"ae790af6-6150-46ef-9f6f-7d8926590585","Type":"ContainerDied","Data":"29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c"} Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.803506 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d26267d95f3cc9a97e85e9cb844534cf1f4c7a93cf6a0b6af42086655c8c3c" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.803515 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.930095 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4"] Jan 23 18:25:29 crc kubenswrapper[4760]: E0123 18:25:29.930950 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae790af6-6150-46ef-9f6f-7d8926590585" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.930998 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae790af6-6150-46ef-9f6f-7d8926590585" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.931487 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae790af6-6150-46ef-9f6f-7d8926590585" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.932836 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.935333 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.935383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.937071 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.940262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.942772 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4"] Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.967547 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.967736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:29 crc kubenswrapper[4760]: 
I0123 18:25:29.967951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6n5m\" (UniqueName: \"kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:29 crc kubenswrapper[4760]: I0123 18:25:29.968269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.069605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6n5m\" (UniqueName: \"kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.069714 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.069780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.069813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.074302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.075128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.075211 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.096922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6n5m\" (UniqueName: \"kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.254934 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:25:30 crc kubenswrapper[4760]: I0123 18:25:30.816067 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4"] Jan 23 18:25:31 crc kubenswrapper[4760]: I0123 18:25:31.831478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" event={"ID":"0191ba0e-f1c2-4a80-ae05-bc968ba09aec","Type":"ContainerStarted","Data":"06e333271640d2ad0336195dbc2c05bf50e5cb899716fce4146a49b4993599f9"} Jan 23 18:25:31 crc kubenswrapper[4760]: I0123 18:25:31.831898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" event={"ID":"0191ba0e-f1c2-4a80-ae05-bc968ba09aec","Type":"ContainerStarted","Data":"667f734f6e3334b6505e409f05338cefd95b2adcc953c8e0962231be1a6ede78"} Jan 23 18:25:31 crc kubenswrapper[4760]: I0123 18:25:31.860902 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" podStartSLOduration=2.294832723 podStartE2EDuration="2.86088608s" podCreationTimestamp="2026-01-23 18:25:29 +0000 UTC" firstStartedPulling="2026-01-23 18:25:30.810724942 +0000 UTC m=+1473.813182875" 
lastFinishedPulling="2026-01-23 18:25:31.376778299 +0000 UTC m=+1474.379236232" observedRunningTime="2026-01-23 18:25:31.857122404 +0000 UTC m=+1474.859580338" watchObservedRunningTime="2026-01-23 18:25:31.86088608 +0000 UTC m=+1474.863344013" Jan 23 18:25:35 crc kubenswrapper[4760]: I0123 18:25:35.982664 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:25:36 crc kubenswrapper[4760]: I0123 18:25:36.065922 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 18:26:46 crc kubenswrapper[4760]: I0123 18:26:46.075142 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:26:46 crc kubenswrapper[4760]: I0123 18:26:46.075684 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:26:58 crc kubenswrapper[4760]: I0123 18:26:58.916864 4760 scope.go:117] "RemoveContainer" containerID="49eaff5ec8a7ff18c5db42e2fb32fb6ee2821ce355c78ee2d8d584151cccbb5b" Jan 23 18:26:58 crc kubenswrapper[4760]: I0123 18:26:58.947794 4760 scope.go:117] "RemoveContainer" containerID="30b76e733373960a876cabab69dcb0f5698d16ae09790b03b72f547e1b1050ae" Jan 23 18:26:59 crc kubenswrapper[4760]: I0123 18:26:59.010709 4760 scope.go:117] "RemoveContainer" containerID="0dd956d7ca4b5cf9eff761ce1aa02ff09a89c1b3d512ef742967c784ca7b90aa" Jan 23 18:26:59 crc kubenswrapper[4760]: I0123 18:26:59.033834 4760 scope.go:117] "RemoveContainer" 
containerID="4944bba3629f638b2143930ccfbba37215e4e646b502848974517e8dabb05253" Jan 23 18:26:59 crc kubenswrapper[4760]: I0123 18:26:59.099428 4760 scope.go:117] "RemoveContainer" containerID="01e490445413872c436809c30f5371018345c884c451493a52ae537253d573df" Jan 23 18:26:59 crc kubenswrapper[4760]: I0123 18:26:59.125623 4760 scope.go:117] "RemoveContainer" containerID="ba9c77afa0c4cb498713a8f54feaaf670cee6749d8b28e318f62738626d23944" Jan 23 18:26:59 crc kubenswrapper[4760]: I0123 18:26:59.152976 4760 scope.go:117] "RemoveContainer" containerID="968cde41f75ddc69b1039cf84a52a7ecd54d355cddf514647e95aa801d876f48" Jan 23 18:27:16 crc kubenswrapper[4760]: I0123 18:27:16.075813 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:27:16 crc kubenswrapper[4760]: I0123 18:27:16.076507 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.463157 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.465776 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.493264 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.602739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4dkv\" (UniqueName: \"kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.602808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.603103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.706473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dkv\" (UniqueName: \"kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.706941 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.707428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.707561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.708127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.728259 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dkv\" (UniqueName: \"kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv\") pod \"redhat-marketplace-89vj7\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:20 crc kubenswrapper[4760]: I0123 18:27:20.788168 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:21 crc kubenswrapper[4760]: I0123 18:27:21.250110 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:22 crc kubenswrapper[4760]: I0123 18:27:21.999653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerStarted","Data":"37d97f007293af5816f19edf15799d7698e1bbe0c8e70abb9207d2f82ceb9e45"} Jan 23 18:27:23 crc kubenswrapper[4760]: I0123 18:27:23.009359 4760 generic.go:334] "Generic (PLEG): container finished" podID="4118facc-214c-4f83-927d-c5ab8555885d" containerID="8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901" exitCode=0 Jan 23 18:27:23 crc kubenswrapper[4760]: I0123 18:27:23.009399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerDied","Data":"8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901"} Jan 23 18:27:24 crc kubenswrapper[4760]: I0123 18:27:24.020326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerStarted","Data":"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6"} Jan 23 18:27:25 crc kubenswrapper[4760]: I0123 18:27:25.035132 4760 generic.go:334] "Generic (PLEG): container finished" podID="4118facc-214c-4f83-927d-c5ab8555885d" containerID="ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6" exitCode=0 Jan 23 18:27:25 crc kubenswrapper[4760]: I0123 18:27:25.035197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" 
event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerDied","Data":"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6"} Jan 23 18:27:26 crc kubenswrapper[4760]: I0123 18:27:26.047373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerStarted","Data":"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b"} Jan 23 18:27:26 crc kubenswrapper[4760]: I0123 18:27:26.073184 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89vj7" podStartSLOduration=3.543199158 podStartE2EDuration="6.073167688s" podCreationTimestamp="2026-01-23 18:27:20 +0000 UTC" firstStartedPulling="2026-01-23 18:27:23.01099259 +0000 UTC m=+1586.013450523" lastFinishedPulling="2026-01-23 18:27:25.54096112 +0000 UTC m=+1588.543419053" observedRunningTime="2026-01-23 18:27:26.069903127 +0000 UTC m=+1589.072361070" watchObservedRunningTime="2026-01-23 18:27:26.073167688 +0000 UTC m=+1589.075625621" Jan 23 18:27:30 crc kubenswrapper[4760]: I0123 18:27:30.789003 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:30 crc kubenswrapper[4760]: I0123 18:27:30.789500 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:30 crc kubenswrapper[4760]: I0123 18:27:30.840485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:31 crc kubenswrapper[4760]: I0123 18:27:31.145046 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:31 crc kubenswrapper[4760]: I0123 18:27:31.203371 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.115132 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89vj7" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="registry-server" containerID="cri-o://ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b" gracePeriod=2 Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.554583 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.656698 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities\") pod \"4118facc-214c-4f83-927d-c5ab8555885d\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.657182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content\") pod \"4118facc-214c-4f83-927d-c5ab8555885d\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.657287 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4dkv\" (UniqueName: \"kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv\") pod \"4118facc-214c-4f83-927d-c5ab8555885d\" (UID: \"4118facc-214c-4f83-927d-c5ab8555885d\") " Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.660494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities" (OuterVolumeSpecName: "utilities") pod "4118facc-214c-4f83-927d-c5ab8555885d" (UID: 
"4118facc-214c-4f83-927d-c5ab8555885d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.675539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv" (OuterVolumeSpecName: "kube-api-access-m4dkv") pod "4118facc-214c-4f83-927d-c5ab8555885d" (UID: "4118facc-214c-4f83-927d-c5ab8555885d"). InnerVolumeSpecName "kube-api-access-m4dkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.681298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4118facc-214c-4f83-927d-c5ab8555885d" (UID: "4118facc-214c-4f83-927d-c5ab8555885d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.759685 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.759740 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4118facc-214c-4f83-927d-c5ab8555885d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:33 crc kubenswrapper[4760]: I0123 18:27:33.759754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4dkv\" (UniqueName: \"kubernetes.io/projected/4118facc-214c-4f83-927d-c5ab8555885d-kube-api-access-m4dkv\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.128366 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="4118facc-214c-4f83-927d-c5ab8555885d" containerID="ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b" exitCode=0 Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.128427 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerDied","Data":"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b"} Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.128460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89vj7" event={"ID":"4118facc-214c-4f83-927d-c5ab8555885d","Type":"ContainerDied","Data":"37d97f007293af5816f19edf15799d7698e1bbe0c8e70abb9207d2f82ceb9e45"} Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.128480 4760 scope.go:117] "RemoveContainer" containerID="ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.128530 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89vj7" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.148536 4760 scope.go:117] "RemoveContainer" containerID="ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.176517 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.180675 4760 scope.go:117] "RemoveContainer" containerID="8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.189196 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89vj7"] Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.215650 4760 scope.go:117] "RemoveContainer" containerID="ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b" Jan 23 18:27:34 crc kubenswrapper[4760]: E0123 18:27:34.216244 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b\": container with ID starting with ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b not found: ID does not exist" containerID="ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.216287 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b"} err="failed to get container status \"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b\": rpc error: code = NotFound desc = could not find container \"ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b\": container with ID starting with ff8dd0892bed715b2b0123a37f77a1369b9be2523f351bf3ca9cfba68a763e0b not found: 
ID does not exist" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.216323 4760 scope.go:117] "RemoveContainer" containerID="ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6" Jan 23 18:27:34 crc kubenswrapper[4760]: E0123 18:27:34.216848 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6\": container with ID starting with ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6 not found: ID does not exist" containerID="ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.216888 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6"} err="failed to get container status \"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6\": rpc error: code = NotFound desc = could not find container \"ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6\": container with ID starting with ade9e1341cf24b19ac80100ccddb959d24c6c1aded3976e6da3bde5779a36ce6 not found: ID does not exist" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.216908 4760 scope.go:117] "RemoveContainer" containerID="8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901" Jan 23 18:27:34 crc kubenswrapper[4760]: E0123 18:27:34.217189 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901\": container with ID starting with 8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901 not found: ID does not exist" containerID="8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901" Jan 23 18:27:34 crc kubenswrapper[4760]: I0123 18:27:34.217213 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901"} err="failed to get container status \"8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901\": rpc error: code = NotFound desc = could not find container \"8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901\": container with ID starting with 8e8091c0a4f9f5de640f9ca5922f220d73b2a450dece417bf07d659e94c1d901 not found: ID does not exist" Jan 23 18:27:35 crc kubenswrapper[4760]: I0123 18:27:35.606239 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4118facc-214c-4f83-927d-c5ab8555885d" path="/var/lib/kubelet/pods/4118facc-214c-4f83-927d-c5ab8555885d/volumes" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.075364 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.075971 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.076023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.076903 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562"} 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.076998 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" gracePeriod=600 Jan 23 18:27:46 crc kubenswrapper[4760]: E0123 18:27:46.198569 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.251361 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" exitCode=0 Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.251430 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562"} Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.251474 4760 scope.go:117] "RemoveContainer" containerID="d10a6c9cff1cc06cc9d41f66b241c8a85945eae00b182bb02ef5740c10c61491" Jan 23 18:27:46 crc kubenswrapper[4760]: I0123 18:27:46.252048 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 
23 18:27:46 crc kubenswrapper[4760]: E0123 18:27:46.252376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:27:59 crc kubenswrapper[4760]: I0123 18:27:59.302069 4760 scope.go:117] "RemoveContainer" containerID="2c8c7eef1616f43f0f97bf1f7c2a50971cc464c1d8686eb0dbe7503df29aaee3" Jan 23 18:27:59 crc kubenswrapper[4760]: I0123 18:27:59.327946 4760 scope.go:117] "RemoveContainer" containerID="296eb75f6869d405f821a570e52f26fa90f16b0254871df72af68b267051027b" Jan 23 18:28:00 crc kubenswrapper[4760]: I0123 18:28:00.595885 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:28:00 crc kubenswrapper[4760]: E0123 18:28:00.596480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:28:13 crc kubenswrapper[4760]: I0123 18:28:13.595605 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:28:13 crc kubenswrapper[4760]: E0123 18:28:13.596157 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:28:27 crc kubenswrapper[4760]: I0123 18:28:27.601116 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:28:27 crc kubenswrapper[4760]: E0123 18:28:27.602305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:28:41 crc kubenswrapper[4760]: I0123 18:28:41.596617 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:28:41 crc kubenswrapper[4760]: E0123 18:28:41.597512 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.870938 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:28:44 crc kubenswrapper[4760]: E0123 18:28:44.872028 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="extract-utilities" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 
18:28:44.872049 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="extract-utilities" Jan 23 18:28:44 crc kubenswrapper[4760]: E0123 18:28:44.872076 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="extract-content" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.872086 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="extract-content" Jan 23 18:28:44 crc kubenswrapper[4760]: E0123 18:28:44.872114 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="registry-server" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.872124 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="registry-server" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.872368 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4118facc-214c-4f83-927d-c5ab8555885d" containerName="registry-server" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.874131 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:44 crc kubenswrapper[4760]: I0123 18:28:44.881674 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.037254 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47bxj\" (UniqueName: \"kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.037935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.038567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.140074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.140142 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-47bxj\" (UniqueName: \"kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.140181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.140906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.140928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.179097 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bxj\" (UniqueName: \"kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj\") pod \"certified-operators-xbcs7\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.205193 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.736967 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:28:45 crc kubenswrapper[4760]: I0123 18:28:45.799298 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerStarted","Data":"d4638870edc5179e4b5aef66024c0b74cf8c7836701053f3e912e32aab1fd44d"} Jan 23 18:28:46 crc kubenswrapper[4760]: I0123 18:28:46.818556 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerID="b5d5f45467d17ef0dcb21eea6234ca8f902fec4e0a6c1d7bcc57e77b66119093" exitCode=0 Jan 23 18:28:46 crc kubenswrapper[4760]: I0123 18:28:46.818649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerDied","Data":"b5d5f45467d17ef0dcb21eea6234ca8f902fec4e0a6c1d7bcc57e77b66119093"} Jan 23 18:28:48 crc kubenswrapper[4760]: I0123 18:28:48.835882 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerID="4296c7a6d8117ee9480eb121e49c507103eb611ba6bdedd0f7bfb1f11b882195" exitCode=0 Jan 23 18:28:48 crc kubenswrapper[4760]: I0123 18:28:48.835929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerDied","Data":"4296c7a6d8117ee9480eb121e49c507103eb611ba6bdedd0f7bfb1f11b882195"} Jan 23 18:28:48 crc kubenswrapper[4760]: I0123 18:28:48.839144 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:28:49 crc kubenswrapper[4760]: I0123 18:28:49.847806 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerStarted","Data":"d951bc57e01041a3162ecad6584584f17bfb51db79edb7f4b311d20797266bb7"} Jan 23 18:28:49 crc kubenswrapper[4760]: I0123 18:28:49.882004 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbcs7" podStartSLOduration=3.2698904779999998 podStartE2EDuration="5.881794593s" podCreationTimestamp="2026-01-23 18:28:44 +0000 UTC" firstStartedPulling="2026-01-23 18:28:46.821901197 +0000 UTC m=+1669.824359150" lastFinishedPulling="2026-01-23 18:28:49.433805332 +0000 UTC m=+1672.436263265" observedRunningTime="2026-01-23 18:28:49.863882744 +0000 UTC m=+1672.866340677" watchObservedRunningTime="2026-01-23 18:28:49.881794593 +0000 UTC m=+1672.884252526" Jan 23 18:28:54 crc kubenswrapper[4760]: I0123 18:28:54.595266 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:28:54 crc kubenswrapper[4760]: E0123 18:28:54.597041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:28:55 crc kubenswrapper[4760]: I0123 18:28:55.206206 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:55 crc kubenswrapper[4760]: I0123 18:28:55.206563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:55 crc kubenswrapper[4760]: I0123 18:28:55.268621 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:55 crc kubenswrapper[4760]: I0123 18:28:55.976643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:56 crc kubenswrapper[4760]: I0123 18:28:56.041724 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:28:56 crc kubenswrapper[4760]: I0123 18:28:56.916944 4760 generic.go:334] "Generic (PLEG): container finished" podID="0191ba0e-f1c2-4a80-ae05-bc968ba09aec" containerID="06e333271640d2ad0336195dbc2c05bf50e5cb899716fce4146a49b4993599f9" exitCode=0 Jan 23 18:28:56 crc kubenswrapper[4760]: I0123 18:28:56.917056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" event={"ID":"0191ba0e-f1c2-4a80-ae05-bc968ba09aec","Type":"ContainerDied","Data":"06e333271640d2ad0336195dbc2c05bf50e5cb899716fce4146a49b4993599f9"} Jan 23 18:28:57 crc kubenswrapper[4760]: I0123 18:28:57.938774 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xbcs7" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="registry-server" containerID="cri-o://d951bc57e01041a3162ecad6584584f17bfb51db79edb7f4b311d20797266bb7" gracePeriod=2 Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.346614 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.361509 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory\") pod \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.361754 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle\") pod \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.361780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6n5m\" (UniqueName: \"kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m\") pod \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.361858 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam\") pod \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\" (UID: \"0191ba0e-f1c2-4a80-ae05-bc968ba09aec\") " Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.368002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m" (OuterVolumeSpecName: "kube-api-access-f6n5m") pod "0191ba0e-f1c2-4a80-ae05-bc968ba09aec" (UID: "0191ba0e-f1c2-4a80-ae05-bc968ba09aec"). InnerVolumeSpecName "kube-api-access-f6n5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.370622 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0191ba0e-f1c2-4a80-ae05-bc968ba09aec" (UID: "0191ba0e-f1c2-4a80-ae05-bc968ba09aec"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.393940 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0191ba0e-f1c2-4a80-ae05-bc968ba09aec" (UID: "0191ba0e-f1c2-4a80-ae05-bc968ba09aec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.410221 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory" (OuterVolumeSpecName: "inventory") pod "0191ba0e-f1c2-4a80-ae05-bc968ba09aec" (UID: "0191ba0e-f1c2-4a80-ae05-bc968ba09aec"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.464137 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6n5m\" (UniqueName: \"kubernetes.io/projected/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-kube-api-access-f6n5m\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.464190 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.464204 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.464219 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0191ba0e-f1c2-4a80-ae05-bc968ba09aec-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.950503 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.950585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4" event={"ID":"0191ba0e-f1c2-4a80-ae05-bc968ba09aec","Type":"ContainerDied","Data":"667f734f6e3334b6505e409f05338cefd95b2adcc953c8e0962231be1a6ede78"} Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.950965 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667f734f6e3334b6505e409f05338cefd95b2adcc953c8e0962231be1a6ede78" Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.954061 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerID="d951bc57e01041a3162ecad6584584f17bfb51db79edb7f4b311d20797266bb7" exitCode=0 Jan 23 18:28:58 crc kubenswrapper[4760]: I0123 18:28:58.954112 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerDied","Data":"d951bc57e01041a3162ecad6584584f17bfb51db79edb7f4b311d20797266bb7"} Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.041769 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz"] Jan 23 18:28:59 crc kubenswrapper[4760]: E0123 18:28:59.042153 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0191ba0e-f1c2-4a80-ae05-bc968ba09aec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.042174 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0191ba0e-f1c2-4a80-ae05-bc968ba09aec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.042362 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0191ba0e-f1c2-4a80-ae05-bc968ba09aec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.043007 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.045108 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.045774 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.045958 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.047707 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.068339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz"] Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.076546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.076607 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plsb\" (UniqueName: 
\"kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.076659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.096169 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.177748 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities\") pod \"f6c999e5-5263-4f04-9548-0f7a217a7bde\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.177873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content\") pod \"f6c999e5-5263-4f04-9548-0f7a217a7bde\" (UID: \"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.177897 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47bxj\" (UniqueName: \"kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj\") pod \"f6c999e5-5263-4f04-9548-0f7a217a7bde\" (UID: 
\"f6c999e5-5263-4f04-9548-0f7a217a7bde\") " Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.178116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.178151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plsb\" (UniqueName: \"kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.178198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.178786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities" (OuterVolumeSpecName: "utilities") pod "f6c999e5-5263-4f04-9548-0f7a217a7bde" (UID: "f6c999e5-5263-4f04-9548-0f7a217a7bde"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.182137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj" (OuterVolumeSpecName: "kube-api-access-47bxj") pod "f6c999e5-5263-4f04-9548-0f7a217a7bde" (UID: "f6c999e5-5263-4f04-9548-0f7a217a7bde"). InnerVolumeSpecName "kube-api-access-47bxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.182819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.182886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.199040 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plsb\" (UniqueName: \"kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.245788 4760 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6c999e5-5263-4f04-9548-0f7a217a7bde" (UID: "f6c999e5-5263-4f04-9548-0f7a217a7bde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.280592 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.280637 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47bxj\" (UniqueName: \"kubernetes.io/projected/f6c999e5-5263-4f04-9548-0f7a217a7bde-kube-api-access-47bxj\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.280656 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c999e5-5263-4f04-9548-0f7a217a7bde-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.409784 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.965032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbcs7" event={"ID":"f6c999e5-5263-4f04-9548-0f7a217a7bde","Type":"ContainerDied","Data":"d4638870edc5179e4b5aef66024c0b74cf8c7836701053f3e912e32aab1fd44d"} Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.965090 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbcs7" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.965374 4760 scope.go:117] "RemoveContainer" containerID="d951bc57e01041a3162ecad6584584f17bfb51db79edb7f4b311d20797266bb7" Jan 23 18:28:59 crc kubenswrapper[4760]: I0123 18:28:59.978281 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz"] Jan 23 18:28:59 crc kubenswrapper[4760]: W0123 18:28:59.992703 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe3acfff_ecc0_44c9_8ee9_d936ce9316e0.slice/crio-2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19 WatchSource:0}: Error finding container 2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19: Status 404 returned error can't find the container with id 2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19 Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.004362 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.014486 4760 scope.go:117] "RemoveContainer" containerID="4296c7a6d8117ee9480eb121e49c507103eb611ba6bdedd0f7bfb1f11b882195" Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.018257 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xbcs7"] Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.044751 4760 scope.go:117] "RemoveContainer" containerID="b5d5f45467d17ef0dcb21eea6234ca8f902fec4e0a6c1d7bcc57e77b66119093" Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.976027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" 
event={"ID":"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0","Type":"ContainerStarted","Data":"867aea2433423e902e1ceb0b09fbda30c4eb6c7b18b43c72871fa55e4372ae83"} Jan 23 18:29:00 crc kubenswrapper[4760]: I0123 18:29:00.976359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" event={"ID":"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0","Type":"ContainerStarted","Data":"2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19"} Jan 23 18:29:01 crc kubenswrapper[4760]: I0123 18:29:01.016030 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" podStartSLOduration=1.5832028839999999 podStartE2EDuration="2.016008519s" podCreationTimestamp="2026-01-23 18:28:59 +0000 UTC" firstStartedPulling="2026-01-23 18:29:00.014286342 +0000 UTC m=+1683.016744295" lastFinishedPulling="2026-01-23 18:29:00.447091967 +0000 UTC m=+1683.449549930" observedRunningTime="2026-01-23 18:29:01.008002725 +0000 UTC m=+1684.010460698" watchObservedRunningTime="2026-01-23 18:29:01.016008519 +0000 UTC m=+1684.018466492" Jan 23 18:29:01 crc kubenswrapper[4760]: I0123 18:29:01.612883 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" path="/var/lib/kubelet/pods/f6c999e5-5263-4f04-9548-0f7a217a7bde/volumes" Jan 23 18:29:06 crc kubenswrapper[4760]: I0123 18:29:06.595257 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:29:06 crc kubenswrapper[4760]: E0123 18:29:06.596157 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:29:18 crc kubenswrapper[4760]: I0123 18:29:18.595240 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:29:18 crc kubenswrapper[4760]: E0123 18:29:18.596053 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:29:31 crc kubenswrapper[4760]: I0123 18:29:31.594638 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:29:31 crc kubenswrapper[4760]: E0123 18:29:31.595489 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:29:42 crc kubenswrapper[4760]: I0123 18:29:42.595250 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:29:42 crc kubenswrapper[4760]: E0123 18:29:42.596227 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:29:55 crc kubenswrapper[4760]: I0123 18:29:55.595670 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:29:55 crc kubenswrapper[4760]: E0123 18:29:55.596531 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.414275 4760 scope.go:117] "RemoveContainer" containerID="97348cc38a5e3478fe29c8796db3fa81598a7f3cec13839426219f164ffeea70" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.448697 4760 scope.go:117] "RemoveContainer" containerID="36f69e732b479b2b813f5ee229824924d7073cfd02d602339522d8bbe8e97ab3" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.465895 4760 scope.go:117] "RemoveContainer" containerID="dbb4bece72ac8859809c295af3a274051db52b5799e6d36c33234013b12f5d21" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.486193 4760 scope.go:117] "RemoveContainer" containerID="f63f084670e68826eb62edf1a3baffdfbc6b2faa7d6601b6fe738dbb3a0a67ce" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.502858 4760 scope.go:117] "RemoveContainer" containerID="d9f0dd2c8e1616599e80c3d25c4de43a6563484c008329d8f247eb3765a43945" Jan 23 18:29:59 crc kubenswrapper[4760]: I0123 18:29:59.530461 4760 scope.go:117] "RemoveContainer" containerID="5ec79fda168453a382443eef65ebb4564b11c84026c33b2d6f23b9fbdbcec092" Jan 23 18:30:00 crc 
kubenswrapper[4760]: I0123 18:30:00.155235 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs"] Jan 23 18:30:00 crc kubenswrapper[4760]: E0123 18:30:00.155624 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="extract-content" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.155637 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="extract-content" Jan 23 18:30:00 crc kubenswrapper[4760]: E0123 18:30:00.155659 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="extract-utilities" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.155668 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="extract-utilities" Jan 23 18:30:00 crc kubenswrapper[4760]: E0123 18:30:00.155681 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="registry-server" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.155687 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="registry-server" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.155862 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c999e5-5263-4f04-9548-0f7a217a7bde" containerName="registry-server" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.156445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.159200 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.167955 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs"] Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.205574 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.254264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.254319 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.254396 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8p6\" (UniqueName: \"kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.355769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm8p6\" (UniqueName: \"kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.355884 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.355935 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.356837 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.365916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.373590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm8p6\" (UniqueName: \"kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6\") pod \"collect-profiles-29486550-dztzs\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.534044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:00 crc kubenswrapper[4760]: W0123 18:30:00.985767 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4387fcd5_317a_4080_ab95_ef7a8c15fe51.slice/crio-fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06 WatchSource:0}: Error finding container fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06: Status 404 returned error can't find the container with id fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06 Jan 23 18:30:00 crc kubenswrapper[4760]: I0123 18:30:00.988263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs"] Jan 23 18:30:01 crc kubenswrapper[4760]: I0123 18:30:01.546305 4760 generic.go:334] "Generic (PLEG): container finished" podID="4387fcd5-317a-4080-ab95-ef7a8c15fe51" containerID="8a45a87c0ebf2b8362871de646d086107968ef47626638d2ad8591d834e766ca" exitCode=0 Jan 23 18:30:01 crc kubenswrapper[4760]: I0123 18:30:01.546355 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" event={"ID":"4387fcd5-317a-4080-ab95-ef7a8c15fe51","Type":"ContainerDied","Data":"8a45a87c0ebf2b8362871de646d086107968ef47626638d2ad8591d834e766ca"} Jan 23 18:30:01 crc kubenswrapper[4760]: I0123 18:30:01.546382 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" event={"ID":"4387fcd5-317a-4080-ab95-ef7a8c15fe51","Type":"ContainerStarted","Data":"fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06"} Jan 23 18:30:02 crc kubenswrapper[4760]: I0123 18:30:02.915473 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.016324 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume\") pod \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.016792 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume\") pod \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.016836 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm8p6\" (UniqueName: \"kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6\") pod \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\" (UID: \"4387fcd5-317a-4080-ab95-ef7a8c15fe51\") " Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.017693 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume" (OuterVolumeSpecName: "config-volume") pod "4387fcd5-317a-4080-ab95-ef7a8c15fe51" (UID: "4387fcd5-317a-4080-ab95-ef7a8c15fe51"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.022876 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4387fcd5-317a-4080-ab95-ef7a8c15fe51" (UID: "4387fcd5-317a-4080-ab95-ef7a8c15fe51"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.025045 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6" (OuterVolumeSpecName: "kube-api-access-rm8p6") pod "4387fcd5-317a-4080-ab95-ef7a8c15fe51" (UID: "4387fcd5-317a-4080-ab95-ef7a8c15fe51"). InnerVolumeSpecName "kube-api-access-rm8p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.120584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm8p6\" (UniqueName: \"kubernetes.io/projected/4387fcd5-317a-4080-ab95-ef7a8c15fe51-kube-api-access-rm8p6\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.120648 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4387fcd5-317a-4080-ab95-ef7a8c15fe51-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.120663 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4387fcd5-317a-4080-ab95-ef7a8c15fe51-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.566337 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" event={"ID":"4387fcd5-317a-4080-ab95-ef7a8c15fe51","Type":"ContainerDied","Data":"fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06"} Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.566376 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc361df2fe9bb921788fdee5519c9141a5f5548fa2f296e1dcfa13e8ca9e0a06" Jan 23 18:30:03 crc kubenswrapper[4760]: I0123 18:30:03.566476 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs" Jan 23 18:30:06 crc kubenswrapper[4760]: I0123 18:30:06.596490 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:30:06 crc kubenswrapper[4760]: E0123 18:30:06.597311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.052661 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2lmmb"] Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.064637 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4388-account-create-update-lfh6r"] Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.081011 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c9ab-account-create-update-6hjgj"] Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.089842 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4388-account-create-update-lfh6r"] Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.096215 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2lmmb"] Jan 23 18:30:08 crc kubenswrapper[4760]: I0123 18:30:08.104348 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c9ab-account-create-update-6hjgj"] Jan 23 18:30:09 crc kubenswrapper[4760]: I0123 18:30:09.611407 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4e759f-76d3-44b0-ad03-134407e85cd5" 
path="/var/lib/kubelet/pods/0f4e759f-76d3-44b0-ad03-134407e85cd5/volumes" Jan 23 18:30:09 crc kubenswrapper[4760]: I0123 18:30:09.612223 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f41ec84-d27d-4f53-b556-347cd55c7fd4" path="/var/lib/kubelet/pods/1f41ec84-d27d-4f53-b556-347cd55c7fd4/volumes" Jan 23 18:30:09 crc kubenswrapper[4760]: I0123 18:30:09.612790 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb4d716-63d1-46ae-8134-3ba82ec34339" path="/var/lib/kubelet/pods/9cb4d716-63d1-46ae-8134-3ba82ec34339/volumes" Jan 23 18:30:12 crc kubenswrapper[4760]: I0123 18:30:12.671333 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" containerID="867aea2433423e902e1ceb0b09fbda30c4eb6c7b18b43c72871fa55e4372ae83" exitCode=0 Jan 23 18:30:12 crc kubenswrapper[4760]: I0123 18:30:12.671432 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" event={"ID":"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0","Type":"ContainerDied","Data":"867aea2433423e902e1ceb0b09fbda30c4eb6c7b18b43c72871fa55e4372ae83"} Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.061327 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lqcnv"] Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.070841 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8c35-account-create-update-wrntz"] Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.082613 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lqcnv"] Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.091238 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8c35-account-create-update-wrntz"] Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.107439 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.244279 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam\") pod \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.244363 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory\") pod \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.244561 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8plsb\" (UniqueName: \"kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb\") pod \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\" (UID: \"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0\") " Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.249796 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb" (OuterVolumeSpecName: "kube-api-access-8plsb") pod "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" (UID: "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0"). InnerVolumeSpecName "kube-api-access-8plsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.271562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" (UID: "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.271970 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory" (OuterVolumeSpecName: "inventory") pod "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" (UID: "fe3acfff-ecc0-44c9-8ee9-d936ce9316e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.346348 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8plsb\" (UniqueName: \"kubernetes.io/projected/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-kube-api-access-8plsb\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.346673 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.346772 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.695921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" 
event={"ID":"fe3acfff-ecc0-44c9-8ee9-d936ce9316e0","Type":"ContainerDied","Data":"2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19"} Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.695975 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.695986 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa037af988c86be8fdac7d4c08a4522c0cf867361b31a324320ed618464df19" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.832556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px"] Jan 23 18:30:14 crc kubenswrapper[4760]: E0123 18:30:14.833085 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.833107 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:14 crc kubenswrapper[4760]: E0123 18:30:14.833154 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4387fcd5-317a-4080-ab95-ef7a8c15fe51" containerName="collect-profiles" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.833162 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4387fcd5-317a-4080-ab95-ef7a8c15fe51" containerName="collect-profiles" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.833359 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.833378 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4387fcd5-317a-4080-ab95-ef7a8c15fe51" containerName="collect-profiles" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.834162 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.836541 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.836954 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.847171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.847489 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.849847 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px"] Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.956790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.957352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:14 crc kubenswrapper[4760]: I0123 18:30:14.957652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgpb\" (UniqueName: \"kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.060544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgpb\" (UniqueName: \"kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.060697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.060741 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.065659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.066054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.092196 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgpb\" (UniqueName: \"kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g48px\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.212704 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.607724 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f13de4-d7c3-4501-9f08-c98dcaac24c7" path="/var/lib/kubelet/pods/14f13de4-d7c3-4501-9f08-c98dcaac24c7/volumes" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.609157 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e438fd-20d8-4fd5-966b-7166b9dc8fed" path="/var/lib/kubelet/pods/a4e438fd-20d8-4fd5-966b-7166b9dc8fed/volumes" Jan 23 18:30:15 crc kubenswrapper[4760]: I0123 18:30:15.731454 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px"] Jan 23 18:30:16 crc kubenswrapper[4760]: I0123 18:30:16.713532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" event={"ID":"c93ac1af-5c71-44cd-ab7b-92a3d90080ce","Type":"ContainerStarted","Data":"24796fcebcc5f26a25bc4bf4164807df80f0523ab496f1d8951e505e1b30bf94"} Jan 23 18:30:16 crc kubenswrapper[4760]: I0123 18:30:16.713859 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" event={"ID":"c93ac1af-5c71-44cd-ab7b-92a3d90080ce","Type":"ContainerStarted","Data":"c24891418df6b7246d8fd1038e032801521e9364876a6190f8127ceb38a9793d"} Jan 23 18:30:16 crc kubenswrapper[4760]: I0123 18:30:16.918576 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" podStartSLOduration=2.47589615 podStartE2EDuration="2.918559009s" podCreationTimestamp="2026-01-23 18:30:14 +0000 UTC" firstStartedPulling="2026-01-23 18:30:15.745867462 +0000 UTC m=+1758.748325405" lastFinishedPulling="2026-01-23 18:30:16.188530331 +0000 UTC m=+1759.190988264" 
observedRunningTime="2026-01-23 18:30:16.91222255 +0000 UTC m=+1759.914680483" watchObservedRunningTime="2026-01-23 18:30:16.918559009 +0000 UTC m=+1759.921016952" Jan 23 18:30:18 crc kubenswrapper[4760]: I0123 18:30:18.040896 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cgcqh"] Jan 23 18:30:18 crc kubenswrapper[4760]: I0123 18:30:18.050954 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cgcqh"] Jan 23 18:30:19 crc kubenswrapper[4760]: I0123 18:30:19.610828 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653cbf70-380b-4f51-9ec9-4338348956ee" path="/var/lib/kubelet/pods/653cbf70-380b-4f51-9ec9-4338348956ee/volumes" Jan 23 18:30:21 crc kubenswrapper[4760]: I0123 18:30:21.595569 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:30:21 crc kubenswrapper[4760]: E0123 18:30:21.596179 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:30:21 crc kubenswrapper[4760]: I0123 18:30:21.759180 4760 generic.go:334] "Generic (PLEG): container finished" podID="c93ac1af-5c71-44cd-ab7b-92a3d90080ce" containerID="24796fcebcc5f26a25bc4bf4164807df80f0523ab496f1d8951e505e1b30bf94" exitCode=0 Jan 23 18:30:21 crc kubenswrapper[4760]: I0123 18:30:21.759252 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" 
event={"ID":"c93ac1af-5c71-44cd-ab7b-92a3d90080ce","Type":"ContainerDied","Data":"24796fcebcc5f26a25bc4bf4164807df80f0523ab496f1d8951e505e1b30bf94"} Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.246099 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.412243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam\") pod \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.412355 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xgpb\" (UniqueName: \"kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb\") pod \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.412602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory\") pod \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\" (UID: \"c93ac1af-5c71-44cd-ab7b-92a3d90080ce\") " Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.424744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb" (OuterVolumeSpecName: "kube-api-access-6xgpb") pod "c93ac1af-5c71-44cd-ab7b-92a3d90080ce" (UID: "c93ac1af-5c71-44cd-ab7b-92a3d90080ce"). InnerVolumeSpecName "kube-api-access-6xgpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.443681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c93ac1af-5c71-44cd-ab7b-92a3d90080ce" (UID: "c93ac1af-5c71-44cd-ab7b-92a3d90080ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.458155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory" (OuterVolumeSpecName: "inventory") pod "c93ac1af-5c71-44cd-ab7b-92a3d90080ce" (UID: "c93ac1af-5c71-44cd-ab7b-92a3d90080ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.515682 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.515724 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.515741 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xgpb\" (UniqueName: \"kubernetes.io/projected/c93ac1af-5c71-44cd-ab7b-92a3d90080ce-kube-api-access-6xgpb\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.807976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" 
event={"ID":"c93ac1af-5c71-44cd-ab7b-92a3d90080ce","Type":"ContainerDied","Data":"c24891418df6b7246d8fd1038e032801521e9364876a6190f8127ceb38a9793d"} Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.808037 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24891418df6b7246d8fd1038e032801521e9364876a6190f8127ceb38a9793d" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.808134 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.894401 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2"] Jan 23 18:30:23 crc kubenswrapper[4760]: E0123 18:30:23.894769 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93ac1af-5c71-44cd-ab7b-92a3d90080ce" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.894788 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93ac1af-5c71-44cd-ab7b-92a3d90080ce" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.894985 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93ac1af-5c71-44cd-ab7b-92a3d90080ce" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.895623 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.900481 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.900518 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.901072 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.901107 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:30:23 crc kubenswrapper[4760]: I0123 18:30:23.921369 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2"] Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.025495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.025570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55jf\" (UniqueName: \"kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 
18:30:24.025916 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.127336 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.127785 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.128030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b55jf\" (UniqueName: \"kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.134746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.142807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.150718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b55jf\" (UniqueName: \"kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pdjw2\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.264009 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:30:24 crc kubenswrapper[4760]: I0123 18:30:24.847855 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2"] Jan 23 18:30:25 crc kubenswrapper[4760]: I0123 18:30:25.825458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" event={"ID":"90442552-7906-4f81-917e-ea963be59436","Type":"ContainerStarted","Data":"aedff0aa8d09230ada9cce20ad2fef96a35053a19dbbb516a87a5fb2a1a5d067"} Jan 23 18:30:25 crc kubenswrapper[4760]: I0123 18:30:25.825795 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" event={"ID":"90442552-7906-4f81-917e-ea963be59436","Type":"ContainerStarted","Data":"472fb3ba6dd58786652e519d1d3edcb401ef3e3e5e981c169a7efecfa2a77f81"} Jan 23 18:30:25 crc kubenswrapper[4760]: I0123 18:30:25.849714 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" podStartSLOduration=2.320755179 podStartE2EDuration="2.849688044s" podCreationTimestamp="2026-01-23 18:30:23 +0000 UTC" firstStartedPulling="2026-01-23 18:30:24.864027765 +0000 UTC m=+1767.866485698" lastFinishedPulling="2026-01-23 18:30:25.39296062 +0000 UTC m=+1768.395418563" observedRunningTime="2026-01-23 18:30:25.84206672 +0000 UTC m=+1768.844524653" watchObservedRunningTime="2026-01-23 18:30:25.849688044 +0000 UTC m=+1768.852145977" Jan 23 18:30:34 crc kubenswrapper[4760]: I0123 18:30:34.595827 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:30:34 crc kubenswrapper[4760]: E0123 18:30:34.597251 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:30:39 crc kubenswrapper[4760]: I0123 18:30:39.039125 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-p4m6p"] Jan 23 18:30:39 crc kubenswrapper[4760]: I0123 18:30:39.051808 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-p4m6p"] Jan 23 18:30:39 crc kubenswrapper[4760]: I0123 18:30:39.607907 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c11195-65a7-41d7-857c-15a8962cd2e3" path="/var/lib/kubelet/pods/75c11195-65a7-41d7-857c-15a8962cd2e3/volumes" Jan 23 18:30:42 crc kubenswrapper[4760]: I0123 18:30:42.037526 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d5x4m"] Jan 23 18:30:42 crc kubenswrapper[4760]: I0123 18:30:42.046621 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d5x4m"] Jan 23 18:30:43 crc kubenswrapper[4760]: I0123 18:30:43.606570 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881750bc-c90b-4964-ae2a-9325359893cf" path="/var/lib/kubelet/pods/881750bc-c90b-4964-ae2a-9325359893cf/volumes" Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 18:30:48.039714 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f7e0-account-create-update-9wvbl"] Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 18:30:48.053008 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-301c-account-create-update-hfbkp"] Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 18:30:48.060956 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rdpkb"] Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 
18:30:48.068463 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-301c-account-create-update-hfbkp"] Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 18:30:48.075259 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rdpkb"] Jan 23 18:30:48 crc kubenswrapper[4760]: I0123 18:30:48.082134 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f7e0-account-create-update-9wvbl"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.038324 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5pl85"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.050449 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0a4f-account-create-update-5htpx"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.061094 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5pl85"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.067977 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-444kz"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.074799 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0a4f-account-create-update-5htpx"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.083605 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-444kz"] Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.595944 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:30:49 crc kubenswrapper[4760]: E0123 18:30:49.596185 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.613103 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d70bae9-29c6-4259-98cc-6398f6b472a9" path="/var/lib/kubelet/pods/3d70bae9-29c6-4259-98cc-6398f6b472a9/volumes" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.614538 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c89e48-2282-4867-ac66-6eff2f352646" path="/var/lib/kubelet/pods/41c89e48-2282-4867-ac66-6eff2f352646/volumes" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.615795 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771accfa-3cd3-46ff-8b06-4ffa90c42a6b" path="/var/lib/kubelet/pods/771accfa-3cd3-46ff-8b06-4ffa90c42a6b/volumes" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.617076 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbff3f04-bb64-4735-bbaa-ea70fcb6f4de" path="/var/lib/kubelet/pods/cbff3f04-bb64-4735-bbaa-ea70fcb6f4de/volumes" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.619286 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddae024a-2888-40f3-954d-f0da9731f77d" path="/var/lib/kubelet/pods/ddae024a-2888-40f3-954d-f0da9731f77d/volumes" Jan 23 18:30:49 crc kubenswrapper[4760]: I0123 18:30:49.620460 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74cdf76-d5cc-404a-a5d8-e6e3a3add887" path="/var/lib/kubelet/pods/f74cdf76-d5cc-404a-a5d8-e6e3a3add887/volumes" Jan 23 18:30:52 crc kubenswrapper[4760]: I0123 18:30:52.940102 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:30:52 crc kubenswrapper[4760]: I0123 18:30:52.942971 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:52 crc kubenswrapper[4760]: I0123 18:30:52.971567 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.027618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrwb\" (UniqueName: \"kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.027733 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.027786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.033336 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-htdjc"] Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.040382 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-htdjc"] Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.128594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrwb\" 
(UniqueName: \"kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.129234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.129434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.129667 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.130031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.156390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrwb\" (UniqueName: 
\"kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb\") pod \"community-operators-bc7mv\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.265700 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.610441 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668368d8-8de8-44fa-bf3b-79308dd8e44b" path="/var/lib/kubelet/pods/668368d8-8de8-44fa-bf3b-79308dd8e44b/volumes" Jan 23 18:30:53 crc kubenswrapper[4760]: I0123 18:30:53.823305 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:30:54 crc kubenswrapper[4760]: I0123 18:30:54.077270 4760 generic.go:334] "Generic (PLEG): container finished" podID="2c1da30d-920b-4ea8-a627-10302870e933" containerID="b2e9af2cd3699b54ff785f9b5cb0490e8fb2ab28159c0541a3d7a678df1169dd" exitCode=0 Jan 23 18:30:54 crc kubenswrapper[4760]: I0123 18:30:54.077402 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerDied","Data":"b2e9af2cd3699b54ff785f9b5cb0490e8fb2ab28159c0541a3d7a678df1169dd"} Jan 23 18:30:54 crc kubenswrapper[4760]: I0123 18:30:54.077664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerStarted","Data":"10583d4ff2bb8cfc940fe1699cf25cd83c2cec7cba30129ed2f3cec1497fbf82"} Jan 23 18:30:56 crc kubenswrapper[4760]: I0123 18:30:56.097442 4760 generic.go:334] "Generic (PLEG): container finished" podID="2c1da30d-920b-4ea8-a627-10302870e933" containerID="62a37e9311b14154dd09ff40a66af9905b45e94616030a11614a26389a6dc0fc" exitCode=0 
Jan 23 18:30:56 crc kubenswrapper[4760]: I0123 18:30:56.097538 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerDied","Data":"62a37e9311b14154dd09ff40a66af9905b45e94616030a11614a26389a6dc0fc"} Jan 23 18:30:57 crc kubenswrapper[4760]: I0123 18:30:57.115060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerStarted","Data":"c6081d4ee1b92672fec1b53539a4dd6f0784812359befb7724f97b3c8c2f46ba"} Jan 23 18:30:57 crc kubenswrapper[4760]: I0123 18:30:57.144168 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bc7mv" podStartSLOduration=2.720185994 podStartE2EDuration="5.144142496s" podCreationTimestamp="2026-01-23 18:30:52 +0000 UTC" firstStartedPulling="2026-01-23 18:30:54.082256557 +0000 UTC m=+1797.084714490" lastFinishedPulling="2026-01-23 18:30:56.506213029 +0000 UTC m=+1799.508670992" observedRunningTime="2026-01-23 18:30:57.136010968 +0000 UTC m=+1800.138468911" watchObservedRunningTime="2026-01-23 18:30:57.144142496 +0000 UTC m=+1800.146600459" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.630617 4760 scope.go:117] "RemoveContainer" containerID="3504667d3af85e677e928085a7b98cda90dac1cc12a9f5e3b62bce7bcdc4a934" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.651438 4760 scope.go:117] "RemoveContainer" containerID="27e365005a6f687d7a9afdc36c56eadcd863879e46fbe8b491feee933409f858" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.715329 4760 scope.go:117] "RemoveContainer" containerID="3d3354c0a0acd2389ac264294f0eb3debde91ebd3de5b95df816a042175e9b46" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.757319 4760 scope.go:117] "RemoveContainer" containerID="628b4fb4bed4e0e78ea053b47269a57566b2dd5519908e1601961a86c539f4ed" Jan 
23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.793126 4760 scope.go:117] "RemoveContainer" containerID="5189b22122c2e262c8ec4d58eacded9e1fcd049fdade6c8d8e5e8b736ce4036e" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.831292 4760 scope.go:117] "RemoveContainer" containerID="c82ffdeae8900d58bab4386f275b80a91e2ba500f436796987f510d6c00c82ac" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.867887 4760 scope.go:117] "RemoveContainer" containerID="22483d07667daf1630b4320b6e6646212bca246cab7402a73085a9c9bfa89458" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.889093 4760 scope.go:117] "RemoveContainer" containerID="02bcafddb1fee8e5a9e9710afeec10beef31a206954b5230e59c1ad6a5fc73c1" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.922071 4760 scope.go:117] "RemoveContainer" containerID="ad950e727b9dd98dd5ba4f71052d2e1962f9982efb276569be0facc11431ed68" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.939918 4760 scope.go:117] "RemoveContainer" containerID="f3925a5c1564d0af1dbfa21623a61d64a939f8412ed63961e42e98ac58d6c863" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.957025 4760 scope.go:117] "RemoveContainer" containerID="1e44164f06c433d58d0fea0aec722b5d3b0a04d9468d2f2ef8703fa7f363c462" Jan 23 18:30:59 crc kubenswrapper[4760]: I0123 18:30:59.981190 4760 scope.go:117] "RemoveContainer" containerID="06d9ab88b32495d2d1c9d836e86f5061754d422665555c45f5cb023fecd4b53e" Jan 23 18:31:00 crc kubenswrapper[4760]: I0123 18:31:00.005618 4760 scope.go:117] "RemoveContainer" containerID="955f544d3528018e131302f1ce66c1de5989e95de1da4c99a22966c1c952957f" Jan 23 18:31:00 crc kubenswrapper[4760]: I0123 18:31:00.024137 4760 scope.go:117] "RemoveContainer" containerID="b4253915f7c44fe9583f96296c8cac84b3d35a9a94dac37206815f713efc6d7c" Jan 23 18:31:00 crc kubenswrapper[4760]: I0123 18:31:00.048882 4760 scope.go:117] "RemoveContainer" containerID="36b9b90391a1a240524536e688e8d6e9195005bfd866fbc10a6d1b577f4558e1" Jan 23 18:31:02 crc 
kubenswrapper[4760]: I0123 18:31:02.595576 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:31:02 crc kubenswrapper[4760]: E0123 18:31:02.596703 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:31:03 crc kubenswrapper[4760]: I0123 18:31:03.266500 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:03 crc kubenswrapper[4760]: I0123 18:31:03.266549 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:03 crc kubenswrapper[4760]: I0123 18:31:03.329563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:04 crc kubenswrapper[4760]: I0123 18:31:04.196560 4760 generic.go:334] "Generic (PLEG): container finished" podID="90442552-7906-4f81-917e-ea963be59436" containerID="aedff0aa8d09230ada9cce20ad2fef96a35053a19dbbb516a87a5fb2a1a5d067" exitCode=0 Jan 23 18:31:04 crc kubenswrapper[4760]: I0123 18:31:04.196661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" event={"ID":"90442552-7906-4f81-917e-ea963be59436","Type":"ContainerDied","Data":"aedff0aa8d09230ada9cce20ad2fef96a35053a19dbbb516a87a5fb2a1a5d067"} Jan 23 18:31:04 crc kubenswrapper[4760]: I0123 18:31:04.257204 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:04 crc kubenswrapper[4760]: I0123 18:31:04.309884 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.602137 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.770940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory\") pod \"90442552-7906-4f81-917e-ea963be59436\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.771099 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam\") pod \"90442552-7906-4f81-917e-ea963be59436\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.771716 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b55jf\" (UniqueName: \"kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf\") pod \"90442552-7906-4f81-917e-ea963be59436\" (UID: \"90442552-7906-4f81-917e-ea963be59436\") " Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.779682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf" (OuterVolumeSpecName: "kube-api-access-b55jf") pod "90442552-7906-4f81-917e-ea963be59436" (UID: "90442552-7906-4f81-917e-ea963be59436"). InnerVolumeSpecName "kube-api-access-b55jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.796765 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90442552-7906-4f81-917e-ea963be59436" (UID: "90442552-7906-4f81-917e-ea963be59436"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.819271 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory" (OuterVolumeSpecName: "inventory") pod "90442552-7906-4f81-917e-ea963be59436" (UID: "90442552-7906-4f81-917e-ea963be59436"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.874356 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.874386 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90442552-7906-4f81-917e-ea963be59436-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:05 crc kubenswrapper[4760]: I0123 18:31:05.874428 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b55jf\" (UniqueName: \"kubernetes.io/projected/90442552-7906-4f81-917e-ea963be59436-kube-api-access-b55jf\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.235581 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.235614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2" event={"ID":"90442552-7906-4f81-917e-ea963be59436","Type":"ContainerDied","Data":"472fb3ba6dd58786652e519d1d3edcb401ef3e3e5e981c169a7efecfa2a77f81"} Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.235914 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="472fb3ba6dd58786652e519d1d3edcb401ef3e3e5e981c169a7efecfa2a77f81" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.236566 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bc7mv" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="registry-server" containerID="cri-o://c6081d4ee1b92672fec1b53539a4dd6f0784812359befb7724f97b3c8c2f46ba" gracePeriod=2 Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.313905 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45"] Jan 23 18:31:06 crc kubenswrapper[4760]: E0123 18:31:06.314373 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90442552-7906-4f81-917e-ea963be59436" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.314392 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="90442552-7906-4f81-917e-ea963be59436" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.314622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="90442552-7906-4f81-917e-ea963be59436" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.315333 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.319933 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.320137 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.320276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.320430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.336270 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45"] Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.486097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.486773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 
18:31:06.487037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwlj\" (UniqueName: \"kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.588275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwlj\" (UniqueName: \"kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.588359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.588518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.594482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.601370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.606917 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwlj\" (UniqueName: \"kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:06 crc kubenswrapper[4760]: I0123 18:31:06.639146 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.243364 4760 generic.go:334] "Generic (PLEG): container finished" podID="2c1da30d-920b-4ea8-a627-10302870e933" containerID="c6081d4ee1b92672fec1b53539a4dd6f0784812359befb7724f97b3c8c2f46ba" exitCode=0 Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.243653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerDied","Data":"c6081d4ee1b92672fec1b53539a4dd6f0784812359befb7724f97b3c8c2f46ba"} Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.547174 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.714892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrwb\" (UniqueName: \"kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb\") pod \"2c1da30d-920b-4ea8-a627-10302870e933\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.715818 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities\") pod \"2c1da30d-920b-4ea8-a627-10302870e933\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.715948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content\") pod \"2c1da30d-920b-4ea8-a627-10302870e933\" (UID: \"2c1da30d-920b-4ea8-a627-10302870e933\") " Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 
18:31:07.717926 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities" (OuterVolumeSpecName: "utilities") pod "2c1da30d-920b-4ea8-a627-10302870e933" (UID: "2c1da30d-920b-4ea8-a627-10302870e933"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.721056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb" (OuterVolumeSpecName: "kube-api-access-9vrwb") pod "2c1da30d-920b-4ea8-a627-10302870e933" (UID: "2c1da30d-920b-4ea8-a627-10302870e933"). InnerVolumeSpecName "kube-api-access-9vrwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.742503 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45"] Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.783281 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c1da30d-920b-4ea8-a627-10302870e933" (UID: "2c1da30d-920b-4ea8-a627-10302870e933"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.818805 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.819054 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c1da30d-920b-4ea8-a627-10302870e933-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:07 crc kubenswrapper[4760]: I0123 18:31:07.819153 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrwb\" (UniqueName: \"kubernetes.io/projected/2c1da30d-920b-4ea8-a627-10302870e933-kube-api-access-9vrwb\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.253217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" event={"ID":"00f893b9-c65b-46a0-ab31-5e5a65b0774f","Type":"ContainerStarted","Data":"2ca3d9c13bf43a5987ef9176ee9a45b205be2e89192bf9dfeb79c68ebf1828c0"} Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.256471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc7mv" event={"ID":"2c1da30d-920b-4ea8-a627-10302870e933","Type":"ContainerDied","Data":"10583d4ff2bb8cfc940fe1699cf25cd83c2cec7cba30129ed2f3cec1497fbf82"} Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.256516 4760 scope.go:117] "RemoveContainer" containerID="c6081d4ee1b92672fec1b53539a4dd6f0784812359befb7724f97b3c8c2f46ba" Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.256524 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bc7mv" Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.361331 4760 scope.go:117] "RemoveContainer" containerID="62a37e9311b14154dd09ff40a66af9905b45e94616030a11614a26389a6dc0fc" Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.387201 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.393596 4760 scope.go:117] "RemoveContainer" containerID="b2e9af2cd3699b54ff785f9b5cb0490e8fb2ab28159c0541a3d7a678df1169dd" Jan 23 18:31:08 crc kubenswrapper[4760]: I0123 18:31:08.395761 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bc7mv"] Jan 23 18:31:09 crc kubenswrapper[4760]: I0123 18:31:09.268183 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" event={"ID":"00f893b9-c65b-46a0-ab31-5e5a65b0774f","Type":"ContainerStarted","Data":"3b100cd872567eaabbe54de618b35106558f2bc2f5e2a685963b793f3a53cec4"} Jan 23 18:31:09 crc kubenswrapper[4760]: I0123 18:31:09.285390 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" podStartSLOduration=2.798021517 podStartE2EDuration="3.28536957s" podCreationTimestamp="2026-01-23 18:31:06 +0000 UTC" firstStartedPulling="2026-01-23 18:31:07.733923392 +0000 UTC m=+1810.736381325" lastFinishedPulling="2026-01-23 18:31:08.221271445 +0000 UTC m=+1811.223729378" observedRunningTime="2026-01-23 18:31:09.281848466 +0000 UTC m=+1812.284306419" watchObservedRunningTime="2026-01-23 18:31:09.28536957 +0000 UTC m=+1812.287827513" Jan 23 18:31:09 crc kubenswrapper[4760]: I0123 18:31:09.605337 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1da30d-920b-4ea8-a627-10302870e933" 
path="/var/lib/kubelet/pods/2c1da30d-920b-4ea8-a627-10302870e933/volumes" Jan 23 18:31:12 crc kubenswrapper[4760]: I0123 18:31:12.297540 4760 generic.go:334] "Generic (PLEG): container finished" podID="00f893b9-c65b-46a0-ab31-5e5a65b0774f" containerID="3b100cd872567eaabbe54de618b35106558f2bc2f5e2a685963b793f3a53cec4" exitCode=0 Jan 23 18:31:12 crc kubenswrapper[4760]: I0123 18:31:12.297622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" event={"ID":"00f893b9-c65b-46a0-ab31-5e5a65b0774f","Type":"ContainerDied","Data":"3b100cd872567eaabbe54de618b35106558f2bc2f5e2a685963b793f3a53cec4"} Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.700885 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.788369 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam\") pod \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.820358 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00f893b9-c65b-46a0-ab31-5e5a65b0774f" (UID: "00f893b9-c65b-46a0-ab31-5e5a65b0774f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.890801 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory\") pod \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.890854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwlj\" (UniqueName: \"kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj\") pod \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\" (UID: \"00f893b9-c65b-46a0-ab31-5e5a65b0774f\") " Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.891258 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.894140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj" (OuterVolumeSpecName: "kube-api-access-zkwlj") pod "00f893b9-c65b-46a0-ab31-5e5a65b0774f" (UID: "00f893b9-c65b-46a0-ab31-5e5a65b0774f"). InnerVolumeSpecName "kube-api-access-zkwlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.913987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory" (OuterVolumeSpecName: "inventory") pod "00f893b9-c65b-46a0-ab31-5e5a65b0774f" (UID: "00f893b9-c65b-46a0-ab31-5e5a65b0774f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.993280 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00f893b9-c65b-46a0-ab31-5e5a65b0774f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:13 crc kubenswrapper[4760]: I0123 18:31:13.993640 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwlj\" (UniqueName: \"kubernetes.io/projected/00f893b9-c65b-46a0-ab31-5e5a65b0774f-kube-api-access-zkwlj\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.314744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" event={"ID":"00f893b9-c65b-46a0-ab31-5e5a65b0774f","Type":"ContainerDied","Data":"2ca3d9c13bf43a5987ef9176ee9a45b205be2e89192bf9dfeb79c68ebf1828c0"} Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.314789 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca3d9c13bf43a5987ef9176ee9a45b205be2e89192bf9dfeb79c68ebf1828c0" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.314867 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.405665 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c"] Jan 23 18:31:14 crc kubenswrapper[4760]: E0123 18:31:14.406038 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f893b9-c65b-46a0-ab31-5e5a65b0774f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406055 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f893b9-c65b-46a0-ab31-5e5a65b0774f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:14 crc kubenswrapper[4760]: E0123 18:31:14.406079 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="extract-content" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406087 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="extract-content" Jan 23 18:31:14 crc kubenswrapper[4760]: E0123 18:31:14.406095 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="registry-server" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406101 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="registry-server" Jan 23 18:31:14 crc kubenswrapper[4760]: E0123 18:31:14.406114 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="extract-utilities" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406120 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="extract-utilities" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 
18:31:14.406278 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1da30d-920b-4ea8-a627-10302870e933" containerName="registry-server" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406303 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f893b9-c65b-46a0-ab31-5e5a65b0774f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.406914 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.410359 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.410687 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.412566 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.413505 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.428686 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c"] Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.501475 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: 
I0123 18:31:14.501642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcx4c\" (UniqueName: \"kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.501832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.603332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.603460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.603545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcx4c\" (UniqueName: 
\"kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.607793 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.608790 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.620166 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcx4c\" (UniqueName: \"kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npp4c\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:14 crc kubenswrapper[4760]: I0123 18:31:14.723069 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:31:15 crc kubenswrapper[4760]: I0123 18:31:15.195765 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c"] Jan 23 18:31:15 crc kubenswrapper[4760]: I0123 18:31:15.323188 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" event={"ID":"299a7a0b-bd14-4c8d-98b5-41c51529c0f1","Type":"ContainerStarted","Data":"90eca7098e501f193bd910573d43ebbe19f1c8a18f1ea6bfef9d5a4ff5939d6b"} Jan 23 18:31:15 crc kubenswrapper[4760]: I0123 18:31:15.595855 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:31:15 crc kubenswrapper[4760]: E0123 18:31:15.596472 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:31:16 crc kubenswrapper[4760]: I0123 18:31:16.333160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" event={"ID":"299a7a0b-bd14-4c8d-98b5-41c51529c0f1","Type":"ContainerStarted","Data":"758560e66135aa879f1d454ca7b06833170b48ca2daa7bff82c343a79c38134b"} Jan 23 18:31:16 crc kubenswrapper[4760]: I0123 18:31:16.350399 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" podStartSLOduration=1.9339622520000002 podStartE2EDuration="2.350381209s" podCreationTimestamp="2026-01-23 18:31:14 +0000 UTC" 
firstStartedPulling="2026-01-23 18:31:15.203560394 +0000 UTC m=+1818.206018327" lastFinishedPulling="2026-01-23 18:31:15.619979351 +0000 UTC m=+1818.622437284" observedRunningTime="2026-01-23 18:31:16.34857186 +0000 UTC m=+1819.351029793" watchObservedRunningTime="2026-01-23 18:31:16.350381209 +0000 UTC m=+1819.352839132" Jan 23 18:31:28 crc kubenswrapper[4760]: I0123 18:31:28.595755 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:31:28 crc kubenswrapper[4760]: E0123 18:31:28.596531 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:31:29 crc kubenswrapper[4760]: I0123 18:31:29.043207 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wmcdm"] Jan 23 18:31:29 crc kubenswrapper[4760]: I0123 18:31:29.052723 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wmcdm"] Jan 23 18:31:29 crc kubenswrapper[4760]: I0123 18:31:29.605053 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547f1a6e-8dd3-4ef8-928a-73747c6576d6" path="/var/lib/kubelet/pods/547f1a6e-8dd3-4ef8-928a-73747c6576d6/volumes" Jan 23 18:31:32 crc kubenswrapper[4760]: I0123 18:31:32.029104 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j22ps"] Jan 23 18:31:32 crc kubenswrapper[4760]: I0123 18:31:32.038624 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j22ps"] Jan 23 18:31:33 crc kubenswrapper[4760]: I0123 18:31:33.032458 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-sync-qmvlz"] Jan 23 18:31:33 crc kubenswrapper[4760]: I0123 18:31:33.033959 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qmvlz"] Jan 23 18:31:33 crc kubenswrapper[4760]: I0123 18:31:33.609741 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fc4c6e-7c68-44cb-bd4e-fc34214ed151" path="/var/lib/kubelet/pods/35fc4c6e-7c68-44cb-bd4e-fc34214ed151/volumes" Jan 23 18:31:33 crc kubenswrapper[4760]: I0123 18:31:33.610908 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7609f243-769b-47f3-bc58-9f58e68a00a2" path="/var/lib/kubelet/pods/7609f243-769b-47f3-bc58-9f58e68a00a2/volumes" Jan 23 18:31:43 crc kubenswrapper[4760]: I0123 18:31:43.595432 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:31:43 crc kubenswrapper[4760]: E0123 18:31:43.596488 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:31:47 crc kubenswrapper[4760]: I0123 18:31:47.038623 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-8bqhf"] Jan 23 18:31:47 crc kubenswrapper[4760]: I0123 18:31:47.048937 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8bqhf"] Jan 23 18:31:47 crc kubenswrapper[4760]: I0123 18:31:47.606998 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d74dde90-69ec-49ed-9531-80aaea5a691e" path="/var/lib/kubelet/pods/d74dde90-69ec-49ed-9531-80aaea5a691e/volumes" Jan 23 18:31:49 crc kubenswrapper[4760]: I0123 18:31:49.034874 
4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-w8mlk"] Jan 23 18:31:49 crc kubenswrapper[4760]: I0123 18:31:49.044260 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-w8mlk"] Jan 23 18:31:49 crc kubenswrapper[4760]: I0123 18:31:49.611183 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1f4885-e30d-4dd2-a80c-8960404fc972" path="/var/lib/kubelet/pods/de1f4885-e30d-4dd2-a80c-8960404fc972/volumes" Jan 23 18:31:54 crc kubenswrapper[4760]: I0123 18:31:54.594599 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:31:54 crc kubenswrapper[4760]: E0123 18:31:54.595370 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:32:00 crc kubenswrapper[4760]: I0123 18:32:00.288639 4760 scope.go:117] "RemoveContainer" containerID="913909bcba966ba4bba8bca7ce252ed7a85b5c71b9558b99c3572f953124c1a3" Jan 23 18:32:00 crc kubenswrapper[4760]: I0123 18:32:00.336197 4760 scope.go:117] "RemoveContainer" containerID="ba7a508f8719421e30a11d12cda12b9fbd7056e28ec7a884e287669cf5671648" Jan 23 18:32:00 crc kubenswrapper[4760]: I0123 18:32:00.376995 4760 scope.go:117] "RemoveContainer" containerID="90f6ce7a56983cf84bb4503964960e913bb4692e4763b92ef7a5038a5024e54f" Jan 23 18:32:00 crc kubenswrapper[4760]: I0123 18:32:00.407873 4760 scope.go:117] "RemoveContainer" containerID="bdedda89a25fb36ba1a9b33449505263fb6cf9b8fc001497d8b05d601b65e855" Jan 23 18:32:00 crc kubenswrapper[4760]: I0123 18:32:00.454054 4760 scope.go:117] "RemoveContainer" 
containerID="a164895318fc686ad18b97be09c3d6db887de7c03b90f97782e85ed8f19d1efc" Jan 23 18:32:05 crc kubenswrapper[4760]: I0123 18:32:05.733926 4760 generic.go:334] "Generic (PLEG): container finished" podID="299a7a0b-bd14-4c8d-98b5-41c51529c0f1" containerID="758560e66135aa879f1d454ca7b06833170b48ca2daa7bff82c343a79c38134b" exitCode=0 Jan 23 18:32:05 crc kubenswrapper[4760]: I0123 18:32:05.734011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" event={"ID":"299a7a0b-bd14-4c8d-98b5-41c51529c0f1","Type":"ContainerDied","Data":"758560e66135aa879f1d454ca7b06833170b48ca2daa7bff82c343a79c38134b"} Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.169913 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.306664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam\") pod \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.306782 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcx4c\" (UniqueName: \"kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c\") pod \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.306813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory\") pod \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\" (UID: \"299a7a0b-bd14-4c8d-98b5-41c51529c0f1\") " Jan 23 18:32:07 crc 
kubenswrapper[4760]: I0123 18:32:07.312827 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c" (OuterVolumeSpecName: "kube-api-access-fcx4c") pod "299a7a0b-bd14-4c8d-98b5-41c51529c0f1" (UID: "299a7a0b-bd14-4c8d-98b5-41c51529c0f1"). InnerVolumeSpecName "kube-api-access-fcx4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.335322 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory" (OuterVolumeSpecName: "inventory") pod "299a7a0b-bd14-4c8d-98b5-41c51529c0f1" (UID: "299a7a0b-bd14-4c8d-98b5-41c51529c0f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.340029 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "299a7a0b-bd14-4c8d-98b5-41c51529c0f1" (UID: "299a7a0b-bd14-4c8d-98b5-41c51529c0f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.409153 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcx4c\" (UniqueName: \"kubernetes.io/projected/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-kube-api-access-fcx4c\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.409187 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.409198 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/299a7a0b-bd14-4c8d-98b5-41c51529c0f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.753687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" event={"ID":"299a7a0b-bd14-4c8d-98b5-41c51529c0f1","Type":"ContainerDied","Data":"90eca7098e501f193bd910573d43ebbe19f1c8a18f1ea6bfef9d5a4ff5939d6b"} Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.753734 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90eca7098e501f193bd910573d43ebbe19f1c8a18f1ea6bfef9d5a4ff5939d6b" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.753735 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.831729 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl6jr"] Jan 23 18:32:07 crc kubenswrapper[4760]: E0123 18:32:07.832194 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299a7a0b-bd14-4c8d-98b5-41c51529c0f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.832221 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="299a7a0b-bd14-4c8d-98b5-41c51529c0f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.832456 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="299a7a0b-bd14-4c8d-98b5-41c51529c0f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.833173 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.835699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.836383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.836644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.836956 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.843253 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl6jr"] Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.917200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9wm7\" (UniqueName: \"kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.917350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:07 crc kubenswrapper[4760]: I0123 18:32:07.917373 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.019479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9wm7\" (UniqueName: \"kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.019629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.019659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.033392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 
23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.034125 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.037541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9wm7\" (UniqueName: \"kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7\") pod \"ssh-known-hosts-edpm-deployment-nl6jr\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.150568 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.595660 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:32:08 crc kubenswrapper[4760]: E0123 18:32:08.596300 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.643802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl6jr"] Jan 23 18:32:08 crc kubenswrapper[4760]: I0123 18:32:08.762732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" event={"ID":"575b672f-b4d2-4108-9136-4522c76bab27","Type":"ContainerStarted","Data":"4177a24affd2c945d2ddb1c5227875e0eacb1859f02ffa20114e55277e4f2fed"} Jan 23 18:32:09 crc kubenswrapper[4760]: I0123 18:32:09.770798 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" event={"ID":"575b672f-b4d2-4108-9136-4522c76bab27","Type":"ContainerStarted","Data":"43e9617e99b3444edd0d6893a4859d8c7ab78d1fcf7c573c0f21db753e78e383"} Jan 23 18:32:09 crc kubenswrapper[4760]: I0123 18:32:09.793079 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" podStartSLOduration=2.327509023 podStartE2EDuration="2.793058973s" podCreationTimestamp="2026-01-23 18:32:07 +0000 UTC" firstStartedPulling="2026-01-23 18:32:08.648351335 +0000 UTC m=+1871.650809268" lastFinishedPulling="2026-01-23 18:32:09.113901285 +0000 UTC m=+1872.116359218" observedRunningTime="2026-01-23 18:32:09.787584217 +0000 UTC m=+1872.790042150" watchObservedRunningTime="2026-01-23 18:32:09.793058973 +0000 UTC m=+1872.795516906" Jan 23 18:32:16 crc kubenswrapper[4760]: I0123 18:32:16.832611 4760 generic.go:334] "Generic (PLEG): container finished" podID="575b672f-b4d2-4108-9136-4522c76bab27" containerID="43e9617e99b3444edd0d6893a4859d8c7ab78d1fcf7c573c0f21db753e78e383" exitCode=0 Jan 23 18:32:16 crc kubenswrapper[4760]: I0123 18:32:16.833532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" event={"ID":"575b672f-b4d2-4108-9136-4522c76bab27","Type":"ContainerDied","Data":"43e9617e99b3444edd0d6893a4859d8c7ab78d1fcf7c573c0f21db753e78e383"} Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.224839 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.307226 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9wm7\" (UniqueName: \"kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7\") pod \"575b672f-b4d2-4108-9136-4522c76bab27\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.307500 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam\") pod \"575b672f-b4d2-4108-9136-4522c76bab27\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.307531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0\") pod \"575b672f-b4d2-4108-9136-4522c76bab27\" (UID: \"575b672f-b4d2-4108-9136-4522c76bab27\") " Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.314306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7" (OuterVolumeSpecName: "kube-api-access-w9wm7") pod "575b672f-b4d2-4108-9136-4522c76bab27" (UID: "575b672f-b4d2-4108-9136-4522c76bab27"). InnerVolumeSpecName "kube-api-access-w9wm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.336800 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "575b672f-b4d2-4108-9136-4522c76bab27" (UID: "575b672f-b4d2-4108-9136-4522c76bab27"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.337181 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "575b672f-b4d2-4108-9136-4522c76bab27" (UID: "575b672f-b4d2-4108-9136-4522c76bab27"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.409318 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.409365 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/575b672f-b4d2-4108-9136-4522c76bab27-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.409376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9wm7\" (UniqueName: \"kubernetes.io/projected/575b672f-b4d2-4108-9136-4522c76bab27-kube-api-access-w9wm7\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.852192 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" event={"ID":"575b672f-b4d2-4108-9136-4522c76bab27","Type":"ContainerDied","Data":"4177a24affd2c945d2ddb1c5227875e0eacb1859f02ffa20114e55277e4f2fed"} Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.852239 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4177a24affd2c945d2ddb1c5227875e0eacb1859f02ffa20114e55277e4f2fed" Jan 23 18:32:18 crc kubenswrapper[4760]: I0123 18:32:18.852269 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl6jr" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.282013 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl"] Jan 23 18:32:19 crc kubenswrapper[4760]: E0123 18:32:19.282374 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575b672f-b4d2-4108-9136-4522c76bab27" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.282387 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b672f-b4d2-4108-9136-4522c76bab27" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.282603 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="575b672f-b4d2-4108-9136-4522c76bab27" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.283174 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.289854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.289938 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.289938 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.290103 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.309540 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl"] Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.427091 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.427257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fll2j\" (UniqueName: \"kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.427299 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.529651 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fll2j\" (UniqueName: \"kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.529834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.530214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.538258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: 
\"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.541034 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.555137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fll2j\" (UniqueName: \"kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t4nwl\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:19 crc kubenswrapper[4760]: I0123 18:32:19.610916 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:20 crc kubenswrapper[4760]: I0123 18:32:20.176494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl"] Jan 23 18:32:20 crc kubenswrapper[4760]: I0123 18:32:20.595641 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:32:20 crc kubenswrapper[4760]: E0123 18:32:20.595904 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:32:20 crc kubenswrapper[4760]: I0123 18:32:20.873535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" event={"ID":"413094cd-6ef7-4c21-b21a-c3a96d905065","Type":"ContainerStarted","Data":"45a02cd058c05310384b8fee43fd0cbe22ae6d482462801dac42306960a5fc82"} Jan 23 18:32:20 crc kubenswrapper[4760]: I0123 18:32:20.873852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" event={"ID":"413094cd-6ef7-4c21-b21a-c3a96d905065","Type":"ContainerStarted","Data":"a28155722ab62c4967bab03ef83b7510c8134667055f77625bf5abe13bdc1408"} Jan 23 18:32:20 crc kubenswrapper[4760]: I0123 18:32:20.892970 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" podStartSLOduration=1.4770479380000001 podStartE2EDuration="1.892947084s" podCreationTimestamp="2026-01-23 18:32:19 +0000 UTC" firstStartedPulling="2026-01-23 18:32:20.177555051 
+0000 UTC m=+1883.180012984" lastFinishedPulling="2026-01-23 18:32:20.593454197 +0000 UTC m=+1883.595912130" observedRunningTime="2026-01-23 18:32:20.890385684 +0000 UTC m=+1883.892843617" watchObservedRunningTime="2026-01-23 18:32:20.892947084 +0000 UTC m=+1883.895405037" Jan 23 18:32:21 crc kubenswrapper[4760]: I0123 18:32:21.041281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9fa7-account-create-update-2x9wm"] Jan 23 18:32:21 crc kubenswrapper[4760]: I0123 18:32:21.050194 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9fa7-account-create-update-2x9wm"] Jan 23 18:32:21 crc kubenswrapper[4760]: I0123 18:32:21.606759 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d67914fd-9d8f-4d74-8bf3-51550c292f95" path="/var/lib/kubelet/pods/d67914fd-9d8f-4d74-8bf3-51550c292f95/volumes" Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.036601 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c2q94"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.044451 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xw828"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.055075 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-414c-account-create-update-c6x2x"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.064611 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c2q94"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.072495 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-414c-account-create-update-c6x2x"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.079025 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xw828"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.085938 4760 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-db-create-7xpz8"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.093468 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-90e1-account-create-update-q66bq"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.100635 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-90e1-account-create-update-q66bq"] Jan 23 18:32:22 crc kubenswrapper[4760]: I0123 18:32:22.108169 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7xpz8"] Jan 23 18:32:23 crc kubenswrapper[4760]: I0123 18:32:23.606823 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24798eb5-c0de-4eec-a09e-d3bb7409e529" path="/var/lib/kubelet/pods/24798eb5-c0de-4eec-a09e-d3bb7409e529/volumes" Jan 23 18:32:23 crc kubenswrapper[4760]: I0123 18:32:23.607471 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479e422c-5c59-4786-9b8e-e237f521fdaf" path="/var/lib/kubelet/pods/479e422c-5c59-4786-9b8e-e237f521fdaf/volumes" Jan 23 18:32:23 crc kubenswrapper[4760]: I0123 18:32:23.607953 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4996067c-6b4f-4cbf-a418-a77889a7a676" path="/var/lib/kubelet/pods/4996067c-6b4f-4cbf-a418-a77889a7a676/volumes" Jan 23 18:32:23 crc kubenswrapper[4760]: I0123 18:32:23.608491 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1169238-c8d2-41e2-889e-54f12b6e2b97" path="/var/lib/kubelet/pods/b1169238-c8d2-41e2-889e-54f12b6e2b97/volumes" Jan 23 18:32:23 crc kubenswrapper[4760]: I0123 18:32:23.609616 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d842cdd0-8594-4655-9b54-81dfb7855f67" path="/var/lib/kubelet/pods/d842cdd0-8594-4655-9b54-81dfb7855f67/volumes" Jan 23 18:32:28 crc kubenswrapper[4760]: I0123 18:32:28.943758 4760 generic.go:334] "Generic (PLEG): container finished" podID="413094cd-6ef7-4c21-b21a-c3a96d905065" 
containerID="45a02cd058c05310384b8fee43fd0cbe22ae6d482462801dac42306960a5fc82" exitCode=0 Jan 23 18:32:28 crc kubenswrapper[4760]: I0123 18:32:28.943857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" event={"ID":"413094cd-6ef7-4c21-b21a-c3a96d905065","Type":"ContainerDied","Data":"45a02cd058c05310384b8fee43fd0cbe22ae6d482462801dac42306960a5fc82"} Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.443608 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.524582 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam\") pod \"413094cd-6ef7-4c21-b21a-c3a96d905065\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.524781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fll2j\" (UniqueName: \"kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j\") pod \"413094cd-6ef7-4c21-b21a-c3a96d905065\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.524821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory\") pod \"413094cd-6ef7-4c21-b21a-c3a96d905065\" (UID: \"413094cd-6ef7-4c21-b21a-c3a96d905065\") " Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.539652 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j" (OuterVolumeSpecName: "kube-api-access-fll2j") pod 
"413094cd-6ef7-4c21-b21a-c3a96d905065" (UID: "413094cd-6ef7-4c21-b21a-c3a96d905065"). InnerVolumeSpecName "kube-api-access-fll2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.553368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "413094cd-6ef7-4c21-b21a-c3a96d905065" (UID: "413094cd-6ef7-4c21-b21a-c3a96d905065"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.553778 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory" (OuterVolumeSpecName: "inventory") pod "413094cd-6ef7-4c21-b21a-c3a96d905065" (UID: "413094cd-6ef7-4c21-b21a-c3a96d905065"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.626997 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fll2j\" (UniqueName: \"kubernetes.io/projected/413094cd-6ef7-4c21-b21a-c3a96d905065-kube-api-access-fll2j\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.627025 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.627034 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/413094cd-6ef7-4c21-b21a-c3a96d905065-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.968787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" event={"ID":"413094cd-6ef7-4c21-b21a-c3a96d905065","Type":"ContainerDied","Data":"a28155722ab62c4967bab03ef83b7510c8134667055f77625bf5abe13bdc1408"} Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.968827 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28155722ab62c4967bab03ef83b7510c8134667055f77625bf5abe13bdc1408" Jan 23 18:32:30 crc kubenswrapper[4760]: I0123 18:32:30.968887 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.036528 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j"] Jan 23 18:32:31 crc kubenswrapper[4760]: E0123 18:32:31.036954 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413094cd-6ef7-4c21-b21a-c3a96d905065" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.036975 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="413094cd-6ef7-4c21-b21a-c3a96d905065" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.037131 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="413094cd-6ef7-4c21-b21a-c3a96d905065" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.037829 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.043086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.043319 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.043526 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.043596 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.055523 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j"] Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.136694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.136764 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfkz\" (UniqueName: \"kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 
18:32:31.136789 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.238341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.238443 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfkz\" (UniqueName: \"kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.238493 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.243930 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.244013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.260752 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sfkz\" (UniqueName: \"kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.355157 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.892175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j"] Jan 23 18:32:31 crc kubenswrapper[4760]: I0123 18:32:31.976211 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" event={"ID":"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47","Type":"ContainerStarted","Data":"5c8d7e87c6ee79d5107a118356d2c4d8b1cbeb399be3fe23b32eea2a4b08483c"} Jan 23 18:32:32 crc kubenswrapper[4760]: I0123 18:32:32.987724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" event={"ID":"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47","Type":"ContainerStarted","Data":"9c4531681b8e8ed76441de459cc8be2ae4dd581d526760314fa73acca56409a0"} Jan 23 18:32:33 crc kubenswrapper[4760]: I0123 18:32:33.028797 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" podStartSLOduration=1.567492691 podStartE2EDuration="2.028780795s" podCreationTimestamp="2026-01-23 18:32:31 +0000 UTC" firstStartedPulling="2026-01-23 18:32:31.902943365 +0000 UTC m=+1894.905401298" lastFinishedPulling="2026-01-23 18:32:32.364231469 +0000 UTC m=+1895.366689402" observedRunningTime="2026-01-23 18:32:33.019338936 +0000 UTC m=+1896.021796889" watchObservedRunningTime="2026-01-23 18:32:33.028780795 +0000 UTC m=+1896.031238728" Jan 23 18:32:35 crc kubenswrapper[4760]: I0123 18:32:35.595999 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:32:35 crc kubenswrapper[4760]: E0123 18:32:35.596508 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:32:42 crc kubenswrapper[4760]: I0123 18:32:42.057827 4760 generic.go:334] "Generic (PLEG): container finished" podID="f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" containerID="9c4531681b8e8ed76441de459cc8be2ae4dd581d526760314fa73acca56409a0" exitCode=0 Jan 23 18:32:42 crc kubenswrapper[4760]: I0123 18:32:42.057921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" event={"ID":"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47","Type":"ContainerDied","Data":"9c4531681b8e8ed76441de459cc8be2ae4dd581d526760314fa73acca56409a0"} Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.435588 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.564494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sfkz\" (UniqueName: \"kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz\") pod \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.564739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory\") pod \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.564806 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam\") pod \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\" (UID: \"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47\") " Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.570286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz" (OuterVolumeSpecName: "kube-api-access-9sfkz") pod "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" (UID: "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47"). InnerVolumeSpecName "kube-api-access-9sfkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.593582 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" (UID: "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.600134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory" (OuterVolumeSpecName: "inventory") pod "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" (UID: "f55ed1f9-ccbe-4968-a629-e0cfbc64ac47"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.667272 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.667307 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:43 crc kubenswrapper[4760]: I0123 18:32:43.667318 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sfkz\" (UniqueName: \"kubernetes.io/projected/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47-kube-api-access-9sfkz\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:44 crc kubenswrapper[4760]: I0123 18:32:44.075030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" event={"ID":"f55ed1f9-ccbe-4968-a629-e0cfbc64ac47","Type":"ContainerDied","Data":"5c8d7e87c6ee79d5107a118356d2c4d8b1cbeb399be3fe23b32eea2a4b08483c"} Jan 23 18:32:44 crc kubenswrapper[4760]: I0123 18:32:44.075076 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8d7e87c6ee79d5107a118356d2c4d8b1cbeb399be3fe23b32eea2a4b08483c" Jan 23 18:32:44 crc kubenswrapper[4760]: I0123 18:32:44.075384 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j" Jan 23 18:32:47 crc kubenswrapper[4760]: I0123 18:32:47.610856 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:32:48 crc kubenswrapper[4760]: I0123 18:32:48.113513 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c"} Jan 23 18:32:52 crc kubenswrapper[4760]: I0123 18:32:52.052430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qxn5j"] Jan 23 18:32:52 crc kubenswrapper[4760]: I0123 18:32:52.058690 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qxn5j"] Jan 23 18:32:53 crc kubenswrapper[4760]: I0123 18:32:53.607105 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e" path="/var/lib/kubelet/pods/4e3ece7e-d0b3-4e4c-816a-1cd699bf8a4e/volumes" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.631743 4760 scope.go:117] "RemoveContainer" containerID="904faf4b83a2d4b858b54d20ea9b8d9d54ce1b23c25918373e8e40a9ff50b0e4" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.662729 4760 scope.go:117] "RemoveContainer" containerID="9c6a2c32084165c02fee60a9a34d8423c526a0a9746ea178c3266a55ba67677f" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.726290 4760 scope.go:117] "RemoveContainer" containerID="6d08accc95c1a2e943c9d213946233de530d9ded545e980d191fe17b1118985a" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.762834 4760 scope.go:117] "RemoveContainer" containerID="14595165d162374308e3bb926f2ae8a113ae0f96083a8d285d2bc9eeb884f0cf" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.807042 4760 scope.go:117] 
"RemoveContainer" containerID="3d9499f3274fd120b8db2e406f42ae3bac48a2b25f53bef74fa3938efab4ee2e" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.843223 4760 scope.go:117] "RemoveContainer" containerID="28f41d6432980515ec2213a5cbf422a665c4f13ef0f4950cdc229e2f0bd731cf" Jan 23 18:33:00 crc kubenswrapper[4760]: I0123 18:33:00.915849 4760 scope.go:117] "RemoveContainer" containerID="bba34d5bf51831bcb252fbbb48f54e6c3f1ff4f5e4ab386aa2ea9b2b93cb2116" Jan 23 18:33:10 crc kubenswrapper[4760]: I0123 18:33:10.048856 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rh87v"] Jan 23 18:33:10 crc kubenswrapper[4760]: I0123 18:33:10.060212 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rh87v"] Jan 23 18:33:11 crc kubenswrapper[4760]: I0123 18:33:11.051195 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4fvwn"] Jan 23 18:33:11 crc kubenswrapper[4760]: I0123 18:33:11.062645 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4fvwn"] Jan 23 18:33:11 crc kubenswrapper[4760]: I0123 18:33:11.606207 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219398d6-2967-4d15-b78a-3ee0165aff71" path="/var/lib/kubelet/pods/219398d6-2967-4d15-b78a-3ee0165aff71/volumes" Jan 23 18:33:11 crc kubenswrapper[4760]: I0123 18:33:11.606824 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad627578-31c9-4d00-88a8-4dae148f1ae5" path="/var/lib/kubelet/pods/ad627578-31c9-4d00-88a8-4dae148f1ae5/volumes" Jan 23 18:33:53 crc kubenswrapper[4760]: I0123 18:33:53.056460 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lhxxf"] Jan 23 18:33:53 crc kubenswrapper[4760]: I0123 18:33:53.064291 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lhxxf"] Jan 23 18:33:53 crc kubenswrapper[4760]: 
I0123 18:33:53.612854 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a4fd76-73c7-4bec-bffe-0a8dbb15cf59" path="/var/lib/kubelet/pods/34a4fd76-73c7-4bec-bffe-0a8dbb15cf59/volumes" Jan 23 18:34:01 crc kubenswrapper[4760]: I0123 18:34:01.046433 4760 scope.go:117] "RemoveContainer" containerID="842d6fd81fe30f309e3a1a58601c9ea55d4eb0d41d68e6ecc2d91710b2419583" Jan 23 18:34:01 crc kubenswrapper[4760]: I0123 18:34:01.093496 4760 scope.go:117] "RemoveContainer" containerID="ea0873daf47c8302d4dd187b05afc8c846e5c646ed98de0a787d6749e372317c" Jan 23 18:34:01 crc kubenswrapper[4760]: I0123 18:34:01.148090 4760 scope.go:117] "RemoveContainer" containerID="dd2feac632a228c0df974e1063166cb08920d555a1e3ac5939ab742e36a0251b" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.121128 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:34:38 crc kubenswrapper[4760]: E0123 18:34:38.122726 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.122839 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.123283 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.125882 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.143965 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.273537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.273595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rqk\" (UniqueName: \"kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.273791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.375842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.375907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f6rqk\" (UniqueName: \"kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.375943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.376575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.377197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.400495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rqk\" (UniqueName: \"kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk\") pod \"redhat-operators-fr7cm\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.449135 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:38 crc kubenswrapper[4760]: I0123 18:34:38.935571 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:34:39 crc kubenswrapper[4760]: I0123 18:34:39.179495 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerStarted","Data":"91a837901b6892e775fd8fc0c16a33b61c5f87ad9d3a33ae5ec33f2eb53e3daf"} Jan 23 18:34:40 crc kubenswrapper[4760]: I0123 18:34:40.190994 4760 generic.go:334] "Generic (PLEG): container finished" podID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerID="2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1" exitCode=0 Jan 23 18:34:40 crc kubenswrapper[4760]: I0123 18:34:40.191072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerDied","Data":"2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1"} Jan 23 18:34:40 crc kubenswrapper[4760]: I0123 18:34:40.193351 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:34:42 crc kubenswrapper[4760]: I0123 18:34:42.208974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerStarted","Data":"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89"} Jan 23 18:34:45 crc kubenswrapper[4760]: I0123 18:34:45.232991 4760 generic.go:334] "Generic (PLEG): container finished" podID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerID="54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89" exitCode=0 Jan 23 18:34:45 crc kubenswrapper[4760]: I0123 18:34:45.233138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerDied","Data":"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89"} Jan 23 18:34:47 crc kubenswrapper[4760]: I0123 18:34:47.263127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerStarted","Data":"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906"} Jan 23 18:34:47 crc kubenswrapper[4760]: I0123 18:34:47.285620 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fr7cm" podStartSLOduration=2.647813845 podStartE2EDuration="9.285601465s" podCreationTimestamp="2026-01-23 18:34:38 +0000 UTC" firstStartedPulling="2026-01-23 18:34:40.193076298 +0000 UTC m=+2023.195534231" lastFinishedPulling="2026-01-23 18:34:46.830863918 +0000 UTC m=+2029.833321851" observedRunningTime="2026-01-23 18:34:47.284089773 +0000 UTC m=+2030.286547736" watchObservedRunningTime="2026-01-23 18:34:47.285601465 +0000 UTC m=+2030.288059408" Jan 23 18:34:48 crc kubenswrapper[4760]: I0123 18:34:48.449559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:48 crc kubenswrapper[4760]: I0123 18:34:48.449615 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:49 crc kubenswrapper[4760]: I0123 18:34:49.507751 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fr7cm" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="registry-server" probeResult="failure" output=< Jan 23 18:34:49 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:34:49 crc kubenswrapper[4760]: > Jan 23 18:34:58 crc kubenswrapper[4760]: I0123 
18:34:58.504477 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:58 crc kubenswrapper[4760]: I0123 18:34:58.570798 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:34:58 crc kubenswrapper[4760]: I0123 18:34:58.743007 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:35:00 crc kubenswrapper[4760]: I0123 18:35:00.391485 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fr7cm" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="registry-server" containerID="cri-o://0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906" gracePeriod=2 Jan 23 18:35:00 crc kubenswrapper[4760]: I0123 18:35:00.925027 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.016200 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities\") pod \"73f256d1-7894-45ad-8fd5-e398f6001d34\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.016323 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rqk\" (UniqueName: \"kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk\") pod \"73f256d1-7894-45ad-8fd5-e398f6001d34\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.016460 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content\") pod \"73f256d1-7894-45ad-8fd5-e398f6001d34\" (UID: \"73f256d1-7894-45ad-8fd5-e398f6001d34\") " Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.017245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities" (OuterVolumeSpecName: "utilities") pod "73f256d1-7894-45ad-8fd5-e398f6001d34" (UID: "73f256d1-7894-45ad-8fd5-e398f6001d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.021449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk" (OuterVolumeSpecName: "kube-api-access-f6rqk") pod "73f256d1-7894-45ad-8fd5-e398f6001d34" (UID: "73f256d1-7894-45ad-8fd5-e398f6001d34"). InnerVolumeSpecName "kube-api-access-f6rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.119059 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rqk\" (UniqueName: \"kubernetes.io/projected/73f256d1-7894-45ad-8fd5-e398f6001d34-kube-api-access-f6rqk\") on node \"crc\" DevicePath \"\"" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.119293 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.136057 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73f256d1-7894-45ad-8fd5-e398f6001d34" (UID: "73f256d1-7894-45ad-8fd5-e398f6001d34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.221194 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f256d1-7894-45ad-8fd5-e398f6001d34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.405558 4760 generic.go:334] "Generic (PLEG): container finished" podID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerID="0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906" exitCode=0 Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.405678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerDied","Data":"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906"} Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.405714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7cm" event={"ID":"73f256d1-7894-45ad-8fd5-e398f6001d34","Type":"ContainerDied","Data":"91a837901b6892e775fd8fc0c16a33b61c5f87ad9d3a33ae5ec33f2eb53e3daf"} Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.405734 4760 scope.go:117] "RemoveContainer" containerID="0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.405878 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7cm" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.430761 4760 scope.go:117] "RemoveContainer" containerID="54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.457524 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.463354 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fr7cm"] Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.491231 4760 scope.go:117] "RemoveContainer" containerID="2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.533112 4760 scope.go:117] "RemoveContainer" containerID="0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906" Jan 23 18:35:01 crc kubenswrapper[4760]: E0123 18:35:01.533593 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906\": container with ID starting with 0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906 not found: ID does not exist" containerID="0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.533653 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906"} err="failed to get container status \"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906\": rpc error: code = NotFound desc = could not find container \"0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906\": container with ID starting with 0c61d46c7566d66e0cff26ce2fb0cec7df71c6de4eca45a7becc244841255906 not found: ID does 
not exist" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.533689 4760 scope.go:117] "RemoveContainer" containerID="54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89" Jan 23 18:35:01 crc kubenswrapper[4760]: E0123 18:35:01.534068 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89\": container with ID starting with 54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89 not found: ID does not exist" containerID="54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.534118 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89"} err="failed to get container status \"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89\": rpc error: code = NotFound desc = could not find container \"54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89\": container with ID starting with 54830d64b59812176e4e30cc4985c5b942921a91fb8884335e5531e9ba1bda89 not found: ID does not exist" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.534149 4760 scope.go:117] "RemoveContainer" containerID="2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1" Jan 23 18:35:01 crc kubenswrapper[4760]: E0123 18:35:01.534425 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1\": container with ID starting with 2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1 not found: ID does not exist" containerID="2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.534460 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1"} err="failed to get container status \"2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1\": rpc error: code = NotFound desc = could not find container \"2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1\": container with ID starting with 2b0a066426f18b9e3f0f80b48f1cafe2456a0094f2e09bc08d5a38ef8a040da1 not found: ID does not exist" Jan 23 18:35:01 crc kubenswrapper[4760]: I0123 18:35:01.604554 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" path="/var/lib/kubelet/pods/73f256d1-7894-45ad-8fd5-e398f6001d34/volumes" Jan 23 18:35:16 crc kubenswrapper[4760]: I0123 18:35:16.075773 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:35:16 crc kubenswrapper[4760]: I0123 18:35:16.076290 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:35:46 crc kubenswrapper[4760]: I0123 18:35:46.075724 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:35:46 crc kubenswrapper[4760]: I0123 18:35:46.077655 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:36:16 crc kubenswrapper[4760]: I0123 18:36:16.075141 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:36:16 crc kubenswrapper[4760]: I0123 18:36:16.075866 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:36:16 crc kubenswrapper[4760]: I0123 18:36:16.075917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:36:16 crc kubenswrapper[4760]: I0123 18:36:16.076711 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:36:16 crc kubenswrapper[4760]: I0123 18:36:16.076776 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" 
containerID="cri-o://3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c" gracePeriod=600 Jan 23 18:36:18 crc kubenswrapper[4760]: I0123 18:36:18.060748 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c" exitCode=0 Jan 23 18:36:18 crc kubenswrapper[4760]: I0123 18:36:18.060859 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c"} Jan 23 18:36:18 crc kubenswrapper[4760]: I0123 18:36:18.061064 4760 scope.go:117] "RemoveContainer" containerID="33211cc9fdcf0fa2adfcc839866bafa98d6f8f8ccebf3adf9187156ae495d562" Jan 23 18:36:20 crc kubenswrapper[4760]: I0123 18:36:20.081140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53"} Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.348032 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.359669 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.382171 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t4nwl"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.383592 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pdjw2"] Jan 23 18:37:27 crc kubenswrapper[4760]: 
I0123 18:37:27.393453 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.400008 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.407063 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.414042 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl6jr"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.420863 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.427805 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.436281 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g5mvz"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.444229 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npp4c"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.450890 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.457507 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.463590 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ssh-known-hosts-edpm-deployment-nl6jr"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.470801 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2wt4j"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.481521 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ht5k4"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.486997 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d9pzz"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.493442 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g48px"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.499920 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-msx45"] Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.608730 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f893b9-c65b-46a0-ab31-5e5a65b0774f" path="/var/lib/kubelet/pods/00f893b9-c65b-46a0-ab31-5e5a65b0774f/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.609869 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0191ba0e-f1c2-4a80-ae05-bc968ba09aec" path="/var/lib/kubelet/pods/0191ba0e-f1c2-4a80-ae05-bc968ba09aec/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.610599 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="299a7a0b-bd14-4c8d-98b5-41c51529c0f1" path="/var/lib/kubelet/pods/299a7a0b-bd14-4c8d-98b5-41c51529c0f1/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.611288 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413094cd-6ef7-4c21-b21a-c3a96d905065" 
path="/var/lib/kubelet/pods/413094cd-6ef7-4c21-b21a-c3a96d905065/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.612961 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575b672f-b4d2-4108-9136-4522c76bab27" path="/var/lib/kubelet/pods/575b672f-b4d2-4108-9136-4522c76bab27/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.615111 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90442552-7906-4f81-917e-ea963be59436" path="/var/lib/kubelet/pods/90442552-7906-4f81-917e-ea963be59436/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.615878 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae790af6-6150-46ef-9f6f-7d8926590585" path="/var/lib/kubelet/pods/ae790af6-6150-46ef-9f6f-7d8926590585/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.616646 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93ac1af-5c71-44cd-ab7b-92a3d90080ce" path="/var/lib/kubelet/pods/c93ac1af-5c71-44cd-ab7b-92a3d90080ce/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.617342 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55ed1f9-ccbe-4968-a629-e0cfbc64ac47" path="/var/lib/kubelet/pods/f55ed1f9-ccbe-4968-a629-e0cfbc64ac47/volumes" Jan 23 18:37:27 crc kubenswrapper[4760]: I0123 18:37:27.620969 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3acfff-ecc0-44c9-8ee9-d936ce9316e0" path="/var/lib/kubelet/pods/fe3acfff-ecc0-44c9-8ee9-d936ce9316e0/volumes" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.569086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6"] Jan 23 18:37:33 crc kubenswrapper[4760]: E0123 18:37:33.569945 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="registry-server" Jan 23 18:37:33 crc 
kubenswrapper[4760]: I0123 18:37:33.569960 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="registry-server" Jan 23 18:37:33 crc kubenswrapper[4760]: E0123 18:37:33.569986 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="extract-content" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.569994 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="extract-content" Jan 23 18:37:33 crc kubenswrapper[4760]: E0123 18:37:33.570010 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="extract-utilities" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.570015 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="extract-utilities" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.570231 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f256d1-7894-45ad-8fd5-e398f6001d34" containerName="registry-server" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.570913 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.572973 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.574191 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.575096 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.575110 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.575367 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.579492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6"] Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.679370 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55hj\" (UniqueName: \"kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.679580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.679749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.679884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.679996 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.781462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.781582 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.781613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.781669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.781874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w55hj\" (UniqueName: \"kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.795701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.795700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.795816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.796159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.805556 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w55hj\" (UniqueName: \"kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:33 crc kubenswrapper[4760]: I0123 18:37:33.889214 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:34 crc kubenswrapper[4760]: I0123 18:37:34.411679 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6"] Jan 23 18:37:34 crc kubenswrapper[4760]: I0123 18:37:34.743358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" event={"ID":"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927","Type":"ContainerStarted","Data":"aab680f36e05a574299ca39d33092d3d69ed561aa7c638c702886f351e5d06ef"} Jan 23 18:37:35 crc kubenswrapper[4760]: I0123 18:37:35.756864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" event={"ID":"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927","Type":"ContainerStarted","Data":"6fea6fbbe948fab62ebe2124bf1e9b8711510b1ae1e38b85bca49c44ee0ac52b"} Jan 23 18:37:35 crc kubenswrapper[4760]: I0123 18:37:35.780111 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" podStartSLOduration=2.227602292 podStartE2EDuration="2.780090942s" podCreationTimestamp="2026-01-23 18:37:33 +0000 UTC" firstStartedPulling="2026-01-23 18:37:34.420680513 +0000 UTC m=+2197.423138446" lastFinishedPulling="2026-01-23 18:37:34.973169163 +0000 UTC m=+2197.975627096" observedRunningTime="2026-01-23 18:37:35.777510305 +0000 UTC m=+2198.779968248" watchObservedRunningTime="2026-01-23 18:37:35.780090942 +0000 UTC m=+2198.782548875" Jan 23 18:37:48 crc kubenswrapper[4760]: I0123 18:37:48.895849 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" containerID="6fea6fbbe948fab62ebe2124bf1e9b8711510b1ae1e38b85bca49c44ee0ac52b" exitCode=0 Jan 23 18:37:48 crc kubenswrapper[4760]: I0123 18:37:48.895914 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" event={"ID":"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927","Type":"ContainerDied","Data":"6fea6fbbe948fab62ebe2124bf1e9b8711510b1ae1e38b85bca49c44ee0ac52b"} Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.290398 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.427426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph\") pod \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.427498 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w55hj\" (UniqueName: \"kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj\") pod \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.427530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle\") pod \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.427581 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory\") pod \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.427734 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam\") pod \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\" (UID: \"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927\") " Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.434244 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph" (OuterVolumeSpecName: "ceph") pod "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" (UID: "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.436662 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" (UID: "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.437683 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj" (OuterVolumeSpecName: "kube-api-access-w55hj") pod "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" (UID: "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927"). InnerVolumeSpecName "kube-api-access-w55hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.457487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory" (OuterVolumeSpecName: "inventory") pod "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" (UID: "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.460699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" (UID: "c6af47ac-285d-4ee9-8ab6-1aa3d98d3927"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.532568 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.532611 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.532624 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w55hj\" (UniqueName: \"kubernetes.io/projected/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-kube-api-access-w55hj\") on node \"crc\" DevicePath \"\"" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.532636 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.532652 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6af47ac-285d-4ee9-8ab6-1aa3d98d3927-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.913660 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6" event={"ID":"c6af47ac-285d-4ee9-8ab6-1aa3d98d3927","Type":"ContainerDied","Data":"aab680f36e05a574299ca39d33092d3d69ed561aa7c638c702886f351e5d06ef"}
Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.913705 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab680f36e05a574299ca39d33092d3d69ed561aa7c638c702886f351e5d06ef"
Jan 23 18:37:50 crc kubenswrapper[4760]: I0123 18:37:50.913706 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.007038 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"]
Jan 23 18:37:51 crc kubenswrapper[4760]: E0123 18:37:51.007629 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.007660 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.007924 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6af47ac-285d-4ee9-8ab6-1aa3d98d3927" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.008834 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.014454 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.019511 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.019765 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.019886 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.020180 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.029457 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"]
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.046915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.047008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.047080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ldr\" (UniqueName: \"kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.047128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.047190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.148212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.148297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.148374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.148443 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.148474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ldr\" (UniqueName: \"kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.153449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.153582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.153789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.155028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.165908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ldr\" (UniqueName: \"kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.329906 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.843973 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"]
Jan 23 18:37:51 crc kubenswrapper[4760]: W0123 18:37:51.848714 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec5b08e0_3bdf_46d0_9f32_ffdfdae26f61.slice/crio-5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec WatchSource:0}: Error finding container 5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec: Status 404 returned error can't find the container with id 5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec
Jan 23 18:37:51 crc kubenswrapper[4760]: I0123 18:37:51.921502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2" event={"ID":"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61","Type":"ContainerStarted","Data":"5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec"}
Jan 23 18:37:52 crc kubenswrapper[4760]: I0123 18:37:52.929752 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2" event={"ID":"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61","Type":"ContainerStarted","Data":"72fcbc2446315c46f5f82aa5990fa1ceef960031fc359749265c737e485fcf02"}
Jan 23 18:37:52 crc kubenswrapper[4760]: I0123 18:37:52.948901 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2" podStartSLOduration=2.401400681 podStartE2EDuration="2.94888068s" podCreationTimestamp="2026-01-23 18:37:50 +0000 UTC" firstStartedPulling="2026-01-23 18:37:51.851821437 +0000 UTC m=+2214.854279370" lastFinishedPulling="2026-01-23 18:37:52.399301436 +0000 UTC m=+2215.401759369" observedRunningTime="2026-01-23 18:37:52.943241324 +0000 UTC m=+2215.945699257" watchObservedRunningTime="2026-01-23 18:37:52.94888068 +0000 UTC m=+2215.951338613"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.338778 4760 scope.go:117] "RemoveContainer" containerID="24796fcebcc5f26a25bc4bf4164807df80f0523ab496f1d8951e505e1b30bf94"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.383790 4760 scope.go:117] "RemoveContainer" containerID="758560e66135aa879f1d454ca7b06833170b48ca2daa7bff82c343a79c38134b"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.460295 4760 scope.go:117] "RemoveContainer" containerID="aedff0aa8d09230ada9cce20ad2fef96a35053a19dbbb516a87a5fb2a1a5d067"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.513252 4760 scope.go:117] "RemoveContainer" containerID="3b100cd872567eaabbe54de618b35106558f2bc2f5e2a685963b793f3a53cec4"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.539053 4760 scope.go:117] "RemoveContainer" containerID="867aea2433423e902e1ceb0b09fbda30c4eb6c7b18b43c72871fa55e4372ae83"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.615091 4760 scope.go:117] "RemoveContainer" containerID="06e333271640d2ad0336195dbc2c05bf50e5cb899716fce4146a49b4993599f9"
Jan 23 18:38:01 crc kubenswrapper[4760]: I0123 18:38:01.677177 4760 scope.go:117] "RemoveContainer" containerID="8bf3d5e4bd952c450d25430e9cbe3b3c75513090227325b4caccb1a2b8ae1b0a"
Jan 23 18:38:46 crc kubenswrapper[4760]: I0123 18:38:46.075687 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:38:46 crc kubenswrapper[4760]: I0123 18:38:46.076254 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:39:01 crc kubenswrapper[4760]: I0123 18:39:01.816133 4760 scope.go:117] "RemoveContainer" containerID="9c4531681b8e8ed76441de459cc8be2ae4dd581d526760314fa73acca56409a0"
Jan 23 18:39:01 crc kubenswrapper[4760]: I0123 18:39:01.854691 4760 scope.go:117] "RemoveContainer" containerID="43e9617e99b3444edd0d6893a4859d8c7ab78d1fcf7c573c0f21db753e78e383"
Jan 23 18:39:01 crc kubenswrapper[4760]: I0123 18:39:01.929087 4760 scope.go:117] "RemoveContainer" containerID="45a02cd058c05310384b8fee43fd0cbe22ae6d482462801dac42306960a5fc82"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.075231 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.076555 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.542568 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.544995 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.567400 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.644632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tplrc\" (UniqueName: \"kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.644676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.644712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.745961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tplrc\" (UniqueName: \"kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.746006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.746092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.746718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.746716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.781042 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tplrc\" (UniqueName: \"kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc\") pod \"certified-operators-csdvt\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") " pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:16 crc kubenswrapper[4760]: I0123 18:39:16.872310 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:17 crc kubenswrapper[4760]: I0123 18:39:17.383307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:17 crc kubenswrapper[4760]: I0123 18:39:17.620128 4760 generic.go:334] "Generic (PLEG): container finished" podID="e71618fc-b2ff-4620-b147-3ad386629add" containerID="ac607a316bb0cd8d85342d5723299a5cc9f4884c53b592be841532ccbf3a25fd" exitCode=0
Jan 23 18:39:17 crc kubenswrapper[4760]: I0123 18:39:17.620225 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerDied","Data":"ac607a316bb0cd8d85342d5723299a5cc9f4884c53b592be841532ccbf3a25fd"}
Jan 23 18:39:17 crc kubenswrapper[4760]: I0123 18:39:17.620484 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerStarted","Data":"dfaa762956406910e7f0315d74c6072e2f30b6106e08fd6d63a3383d61bd5aba"}
Jan 23 18:39:19 crc kubenswrapper[4760]: I0123 18:39:19.639843 4760 generic.go:334] "Generic (PLEG): container finished" podID="e71618fc-b2ff-4620-b147-3ad386629add" containerID="723dbbe149835a041afd57e2e4991ec5af9a2cf748009b2c049095a0db78aff0" exitCode=0
Jan 23 18:39:19 crc kubenswrapper[4760]: I0123 18:39:19.639898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerDied","Data":"723dbbe149835a041afd57e2e4991ec5af9a2cf748009b2c049095a0db78aff0"}
Jan 23 18:39:20 crc kubenswrapper[4760]: I0123 18:39:20.649894 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerStarted","Data":"2735709e2e72c8f32de5e8664a3f349990d77adfc1a0f0fa82c16352578528af"}
Jan 23 18:39:20 crc kubenswrapper[4760]: I0123 18:39:20.684357 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-csdvt" podStartSLOduration=2.222972178 podStartE2EDuration="4.684325686s" podCreationTimestamp="2026-01-23 18:39:16 +0000 UTC" firstStartedPulling="2026-01-23 18:39:17.623910992 +0000 UTC m=+2300.626368925" lastFinishedPulling="2026-01-23 18:39:20.08526446 +0000 UTC m=+2303.087722433" observedRunningTime="2026-01-23 18:39:20.676948314 +0000 UTC m=+2303.679406257" watchObservedRunningTime="2026-01-23 18:39:20.684325686 +0000 UTC m=+2303.686783629"
Jan 23 18:39:26 crc kubenswrapper[4760]: I0123 18:39:26.873641 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:26 crc kubenswrapper[4760]: I0123 18:39:26.875704 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:26 crc kubenswrapper[4760]: I0123 18:39:26.929609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:27 crc kubenswrapper[4760]: I0123 18:39:27.766140 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:27 crc kubenswrapper[4760]: I0123 18:39:27.809174 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:29 crc kubenswrapper[4760]: I0123 18:39:29.729830 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-csdvt" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="registry-server" containerID="cri-o://2735709e2e72c8f32de5e8664a3f349990d77adfc1a0f0fa82c16352578528af" gracePeriod=2
Jan 23 18:39:30 crc kubenswrapper[4760]: I0123 18:39:30.739389 4760 generic.go:334] "Generic (PLEG): container finished" podID="e71618fc-b2ff-4620-b147-3ad386629add" containerID="2735709e2e72c8f32de5e8664a3f349990d77adfc1a0f0fa82c16352578528af" exitCode=0
Jan 23 18:39:30 crc kubenswrapper[4760]: I0123 18:39:30.739454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerDied","Data":"2735709e2e72c8f32de5e8664a3f349990d77adfc1a0f0fa82c16352578528af"}
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.267883 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.426681 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content\") pod \"e71618fc-b2ff-4620-b147-3ad386629add\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") "
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.427107 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tplrc\" (UniqueName: \"kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc\") pod \"e71618fc-b2ff-4620-b147-3ad386629add\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") "
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.427167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities\") pod \"e71618fc-b2ff-4620-b147-3ad386629add\" (UID: \"e71618fc-b2ff-4620-b147-3ad386629add\") "
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.428147 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities" (OuterVolumeSpecName: "utilities") pod "e71618fc-b2ff-4620-b147-3ad386629add" (UID: "e71618fc-b2ff-4620-b147-3ad386629add"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.436711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc" (OuterVolumeSpecName: "kube-api-access-tplrc") pod "e71618fc-b2ff-4620-b147-3ad386629add" (UID: "e71618fc-b2ff-4620-b147-3ad386629add"). InnerVolumeSpecName "kube-api-access-tplrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.473043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e71618fc-b2ff-4620-b147-3ad386629add" (UID: "e71618fc-b2ff-4620-b147-3ad386629add"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.529740 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.530004 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tplrc\" (UniqueName: \"kubernetes.io/projected/e71618fc-b2ff-4620-b147-3ad386629add-kube-api-access-tplrc\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.530102 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e71618fc-b2ff-4620-b147-3ad386629add-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.753877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdvt" event={"ID":"e71618fc-b2ff-4620-b147-3ad386629add","Type":"ContainerDied","Data":"dfaa762956406910e7f0315d74c6072e2f30b6106e08fd6d63a3383d61bd5aba"}
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.753970 4760 scope.go:117] "RemoveContainer" containerID="2735709e2e72c8f32de5e8664a3f349990d77adfc1a0f0fa82c16352578528af"
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.754180 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdvt"
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.788081 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.788775 4760 scope.go:117] "RemoveContainer" containerID="723dbbe149835a041afd57e2e4991ec5af9a2cf748009b2c049095a0db78aff0"
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.800690 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-csdvt"]
Jan 23 18:39:31 crc kubenswrapper[4760]: I0123 18:39:31.823928 4760 scope.go:117] "RemoveContainer" containerID="ac607a316bb0cd8d85342d5723299a5cc9f4884c53b592be841532ccbf3a25fd"
Jan 23 18:39:33 crc kubenswrapper[4760]: I0123 18:39:33.609340 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71618fc-b2ff-4620-b147-3ad386629add" path="/var/lib/kubelet/pods/e71618fc-b2ff-4620-b147-3ad386629add/volumes"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.075996 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.076742 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.076793 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.077784 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.077860 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" gracePeriod=600
Jan 23 18:39:46 crc kubenswrapper[4760]: E0123 18:39:46.766146 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.885090 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" exitCode=0
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.885134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53"}
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.885170 4760 scope.go:117] "RemoveContainer" containerID="3897c460dff3c1dd76b4a0f32540dbd4327e1c8431b496e56315435a5349f64c"
Jan 23 18:39:46 crc kubenswrapper[4760]: I0123 18:39:46.885811 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53"
Jan 23 18:39:46 crc kubenswrapper[4760]: E0123 18:39:46.886064 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 18:39:53 crc kubenswrapper[4760]: I0123 18:39:53.945910 4760 generic.go:334] "Generic (PLEG): container finished" podID="ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" containerID="72fcbc2446315c46f5f82aa5990fa1ceef960031fc359749265c737e485fcf02" exitCode=0
Jan 23 18:39:53 crc kubenswrapper[4760]: I0123 18:39:53.945972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2" event={"ID":"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61","Type":"ContainerDied","Data":"72fcbc2446315c46f5f82aa5990fa1ceef960031fc359749265c737e485fcf02"}
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.321790 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.440777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory\") pod \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") "
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.440897 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam\") pod \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") "
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.441026 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle\") pod \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") "
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.441060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ldr\" (UniqueName: \"kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr\") pod \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") "
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.441078 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph\") pod \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\" (UID: \"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61\") "
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.454720 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph" (OuterVolumeSpecName: "ceph") pod "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" (UID: "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.454748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" (UID: "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.454745 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr" (OuterVolumeSpecName: "kube-api-access-t5ldr") pod "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" (UID: "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61"). InnerVolumeSpecName "kube-api-access-t5ldr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.464366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" (UID: "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.468299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory" (OuterVolumeSpecName: "inventory") pod "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" (UID: "ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.542605 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.542641 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.542651 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.542661 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ldr\" (UniqueName: \"kubernetes.io/projected/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-kube-api-access-t5ldr\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.542669 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61-ceph\") on node \"crc\" DevicePath \"\""
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.971514 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2" event={"ID":"ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61","Type":"ContainerDied","Data":"5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec"}
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.971554 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e49dcd30dff34ab0e2ccfd95bf04cb7ab054031c5c3d58cdf522dc40f9b8eec"
Jan 23 18:39:55 crc kubenswrapper[4760]: I0123 18:39:55.971560 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2"
Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.073813 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds"]
Jan 23 18:39:56 crc kubenswrapper[4760]: E0123 18:39:56.074323 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074354 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:39:56 crc kubenswrapper[4760]: E0123 18:39:56.074372 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="extract-content"
Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074382 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="extract-content"
Jan 23 18:39:56 crc kubenswrapper[4760]: E0123 18:39:56.074399 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="registry-server"
Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074434 4760 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="registry-server" Jan 23 18:39:56 crc kubenswrapper[4760]: E0123 18:39:56.074458 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="extract-utilities" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074469 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="extract-utilities" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074721 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.074762 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71618fc-b2ff-4620-b147-3ad386629add" containerName="registry-server" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.075670 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.078371 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.078771 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.079049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.081159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.087081 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds"] Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.094015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.154696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.154756 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" 
(UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.154790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76b4\" (UniqueName: \"kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.154886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.257051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.257178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: 
I0123 18:39:56.257213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.257250 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76b4\" (UniqueName: \"kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.261090 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.261260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.261830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.295322 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76b4\" (UniqueName: \"kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.401908 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.938762 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds"] Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.943529 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:39:56 crc kubenswrapper[4760]: I0123 18:39:56.979993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" event={"ID":"59cea3b6-c76e-4cca-9e9f-15bdeab71c63","Type":"ContainerStarted","Data":"199b67c89c3b8536edba6ea4493985fb7018674ff6cf41ea02d27b5003718669"} Jan 23 18:39:57 crc kubenswrapper[4760]: I0123 18:39:57.989583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" event={"ID":"59cea3b6-c76e-4cca-9e9f-15bdeab71c63","Type":"ContainerStarted","Data":"c6f9363feffbe5f869ffb17daa9f231240b90f67bea90b557c6aed53ef5ed3ca"} Jan 23 18:39:58 crc kubenswrapper[4760]: I0123 18:39:58.011174 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" podStartSLOduration=1.499435507 podStartE2EDuration="2.011158781s" podCreationTimestamp="2026-01-23 18:39:56 +0000 UTC" firstStartedPulling="2026-01-23 18:39:56.943102779 +0000 UTC m=+2339.945560752" lastFinishedPulling="2026-01-23 18:39:57.454826073 +0000 UTC m=+2340.457284026" observedRunningTime="2026-01-23 18:39:58.007124247 +0000 UTC m=+2341.009582190" watchObservedRunningTime="2026-01-23 18:39:58.011158781 +0000 UTC m=+2341.013616714" Jan 23 18:40:00 crc kubenswrapper[4760]: I0123 18:40:00.596211 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:40:00 crc kubenswrapper[4760]: E0123 18:40:00.597102 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:40:08 crc kubenswrapper[4760]: I0123 18:40:08.911129 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:08 crc kubenswrapper[4760]: I0123 18:40:08.915697 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:08 crc kubenswrapper[4760]: I0123 18:40:08.925848 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.001394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.001528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.001664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqtd\" (UniqueName: \"kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.102819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqtd\" (UniqueName: \"kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.102944 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.102988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.103401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.103546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.124585 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqtd\" (UniqueName: \"kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd\") pod \"redhat-marketplace-rfxs5\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.239940 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:09 crc kubenswrapper[4760]: I0123 18:40:09.742834 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:10 crc kubenswrapper[4760]: I0123 18:40:10.085268 4760 generic.go:334] "Generic (PLEG): container finished" podID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerID="2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5" exitCode=0 Jan 23 18:40:10 crc kubenswrapper[4760]: I0123 18:40:10.085321 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerDied","Data":"2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5"} Jan 23 18:40:10 crc kubenswrapper[4760]: I0123 18:40:10.085650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerStarted","Data":"011a72f5a1e976c208f7a097d690ac133b27b1372a9d62d6b1fef86b50df5b79"} Jan 23 18:40:11 crc kubenswrapper[4760]: I0123 18:40:11.096111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerStarted","Data":"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0"} Jan 23 18:40:12 crc kubenswrapper[4760]: I0123 18:40:12.107358 4760 generic.go:334] "Generic (PLEG): container finished" podID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerID="ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0" exitCode=0 Jan 23 18:40:12 crc kubenswrapper[4760]: I0123 18:40:12.107400 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" 
event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerDied","Data":"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0"} Jan 23 18:40:13 crc kubenswrapper[4760]: I0123 18:40:13.116854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerStarted","Data":"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213"} Jan 23 18:40:13 crc kubenswrapper[4760]: I0123 18:40:13.596517 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:40:13 crc kubenswrapper[4760]: E0123 18:40:13.597198 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:40:19 crc kubenswrapper[4760]: I0123 18:40:19.240452 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:19 crc kubenswrapper[4760]: I0123 18:40:19.241123 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:19 crc kubenswrapper[4760]: I0123 18:40:19.291046 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:19 crc kubenswrapper[4760]: I0123 18:40:19.309132 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfxs5" podStartSLOduration=8.850104624 podStartE2EDuration="11.309108372s" 
podCreationTimestamp="2026-01-23 18:40:08 +0000 UTC" firstStartedPulling="2026-01-23 18:40:10.086691883 +0000 UTC m=+2353.089149816" lastFinishedPulling="2026-01-23 18:40:12.545695621 +0000 UTC m=+2355.548153564" observedRunningTime="2026-01-23 18:40:13.137103238 +0000 UTC m=+2356.139561181" watchObservedRunningTime="2026-01-23 18:40:19.309108372 +0000 UTC m=+2362.311566315" Jan 23 18:40:20 crc kubenswrapper[4760]: I0123 18:40:20.234451 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:20 crc kubenswrapper[4760]: I0123 18:40:20.287852 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.199365 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rfxs5" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="registry-server" containerID="cri-o://18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213" gracePeriod=2 Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.654259 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.766833 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities\") pod \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.766913 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnqtd\" (UniqueName: \"kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd\") pod \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.767065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content\") pod \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\" (UID: \"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0\") " Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.767979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities" (OuterVolumeSpecName: "utilities") pod "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" (UID: "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.774605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd" (OuterVolumeSpecName: "kube-api-access-pnqtd") pod "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" (UID: "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0"). InnerVolumeSpecName "kube-api-access-pnqtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.788266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" (UID: "092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.868766 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.868804 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnqtd\" (UniqueName: \"kubernetes.io/projected/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-kube-api-access-pnqtd\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:22 crc kubenswrapper[4760]: I0123 18:40:22.868815 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.212811 4760 generic.go:334] "Generic (PLEG): container finished" podID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerID="18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213" exitCode=0 Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.212895 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfxs5" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.212929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerDied","Data":"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213"} Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.213320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfxs5" event={"ID":"092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0","Type":"ContainerDied","Data":"011a72f5a1e976c208f7a097d690ac133b27b1372a9d62d6b1fef86b50df5b79"} Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.213342 4760 scope.go:117] "RemoveContainer" containerID="18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.255836 4760 scope.go:117] "RemoveContainer" containerID="ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.274898 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.283157 4760 scope.go:117] "RemoveContainer" containerID="2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.288859 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfxs5"] Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.331784 4760 scope.go:117] "RemoveContainer" containerID="18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213" Jan 23 18:40:23 crc kubenswrapper[4760]: E0123 18:40:23.332339 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213\": container with ID starting with 18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213 not found: ID does not exist" containerID="18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.332387 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213"} err="failed to get container status \"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213\": rpc error: code = NotFound desc = could not find container \"18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213\": container with ID starting with 18e2aadcde4420b463a3bb28d7e5bf18f16ea79d1114efb2e6993b7e6d49f213 not found: ID does not exist" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.332435 4760 scope.go:117] "RemoveContainer" containerID="ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0" Jan 23 18:40:23 crc kubenswrapper[4760]: E0123 18:40:23.333595 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0\": container with ID starting with ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0 not found: ID does not exist" containerID="ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.333646 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0"} err="failed to get container status \"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0\": rpc error: code = NotFound desc = could not find container \"ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0\": container with ID 
starting with ba37d2fe1589f25c8ce253366d6aa153549e55650a26de2ab6cf41ef0e36d7a0 not found: ID does not exist" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.333674 4760 scope.go:117] "RemoveContainer" containerID="2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5" Jan 23 18:40:23 crc kubenswrapper[4760]: E0123 18:40:23.334070 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5\": container with ID starting with 2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5 not found: ID does not exist" containerID="2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.334119 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5"} err="failed to get container status \"2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5\": rpc error: code = NotFound desc = could not find container \"2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5\": container with ID starting with 2c329aa61661864efa12f5f4962e526ad001b4da3d25214d40033ee82cda27e5 not found: ID does not exist" Jan 23 18:40:23 crc kubenswrapper[4760]: I0123 18:40:23.613733 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" path="/var/lib/kubelet/pods/092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0/volumes" Jan 23 18:40:25 crc kubenswrapper[4760]: I0123 18:40:25.235268 4760 generic.go:334] "Generic (PLEG): container finished" podID="59cea3b6-c76e-4cca-9e9f-15bdeab71c63" containerID="c6f9363feffbe5f869ffb17daa9f231240b90f67bea90b557c6aed53ef5ed3ca" exitCode=0 Jan 23 18:40:25 crc kubenswrapper[4760]: I0123 18:40:25.235395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" event={"ID":"59cea3b6-c76e-4cca-9e9f-15bdeab71c63","Type":"ContainerDied","Data":"c6f9363feffbe5f869ffb17daa9f231240b90f67bea90b557c6aed53ef5ed3ca"} Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.661828 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.843806 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam\") pod \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.843961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph\") pod \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.844027 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory\") pod \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.844057 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f76b4\" (UniqueName: \"kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4\") pod \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\" (UID: \"59cea3b6-c76e-4cca-9e9f-15bdeab71c63\") " Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.850523 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4" (OuterVolumeSpecName: "kube-api-access-f76b4") pod "59cea3b6-c76e-4cca-9e9f-15bdeab71c63" (UID: "59cea3b6-c76e-4cca-9e9f-15bdeab71c63"). InnerVolumeSpecName "kube-api-access-f76b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.851553 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph" (OuterVolumeSpecName: "ceph") pod "59cea3b6-c76e-4cca-9e9f-15bdeab71c63" (UID: "59cea3b6-c76e-4cca-9e9f-15bdeab71c63"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.873105 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory" (OuterVolumeSpecName: "inventory") pod "59cea3b6-c76e-4cca-9e9f-15bdeab71c63" (UID: "59cea3b6-c76e-4cca-9e9f-15bdeab71c63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.877103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59cea3b6-c76e-4cca-9e9f-15bdeab71c63" (UID: "59cea3b6-c76e-4cca-9e9f-15bdeab71c63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.946845 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.946898 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f76b4\" (UniqueName: \"kubernetes.io/projected/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-kube-api-access-f76b4\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.946916 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:26 crc kubenswrapper[4760]: I0123 18:40:26.946928 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59cea3b6-c76e-4cca-9e9f-15bdeab71c63-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.256509 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" event={"ID":"59cea3b6-c76e-4cca-9e9f-15bdeab71c63","Type":"ContainerDied","Data":"199b67c89c3b8536edba6ea4493985fb7018674ff6cf41ea02d27b5003718669"} Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.256560 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="199b67c89c3b8536edba6ea4493985fb7018674ff6cf41ea02d27b5003718669" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.256601 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.340740 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6"] Jan 23 18:40:27 crc kubenswrapper[4760]: E0123 18:40:27.341089 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cea3b6-c76e-4cca-9e9f-15bdeab71c63" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.341108 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cea3b6-c76e-4cca-9e9f-15bdeab71c63" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:27 crc kubenswrapper[4760]: E0123 18:40:27.341119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="extract-content" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.341125 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="extract-content" Jan 23 18:40:27 crc kubenswrapper[4760]: E0123 18:40:27.341144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="extract-utilities" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.341152 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="extract-utilities" Jan 23 18:40:27 crc kubenswrapper[4760]: E0123 18:40:27.341175 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="registry-server" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.341184 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="registry-server" Jan 23 18:40:27 crc 
kubenswrapper[4760]: I0123 18:40:27.341395 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cea3b6-c76e-4cca-9e9f-15bdeab71c63" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.341481 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="092fd95a-a6fb-49bb-b3a2-ef0ccc4494a0" containerName="registry-server" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.342817 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.344482 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.344640 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.344701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.347218 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.348688 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.355271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.355320 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.355350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lnq\" (UniqueName: \"kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.355391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.355986 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6"] Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.457329 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" 
(UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.457382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.457456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lnq\" (UniqueName: \"kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.457508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.461494 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.462043 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.462071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.478027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lnq\" (UniqueName: \"kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.611708 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:40:27 crc kubenswrapper[4760]: E0123 18:40:27.612076 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:40:27 crc kubenswrapper[4760]: I0123 18:40:27.716095 4760 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:28 crc kubenswrapper[4760]: I0123 18:40:28.236269 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6"] Jan 23 18:40:28 crc kubenswrapper[4760]: I0123 18:40:28.275157 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" event={"ID":"4824cd7d-8d66-48ac-bf98-f7f4ee516458","Type":"ContainerStarted","Data":"b5fa007dd1f24eca35f21856796969a46e9f8a11b3e604f1e9d618d584118a5f"} Jan 23 18:40:29 crc kubenswrapper[4760]: I0123 18:40:29.283530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" event={"ID":"4824cd7d-8d66-48ac-bf98-f7f4ee516458","Type":"ContainerStarted","Data":"01b337ae7ce8dcbfaf43df905d989844d6ebd511201a244981ed8d39bd9ad72d"} Jan 23 18:40:29 crc kubenswrapper[4760]: I0123 18:40:29.306757 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" podStartSLOduration=1.715387832 podStartE2EDuration="2.306740906s" podCreationTimestamp="2026-01-23 18:40:27 +0000 UTC" firstStartedPulling="2026-01-23 18:40:28.243906979 +0000 UTC m=+2371.246364912" lastFinishedPulling="2026-01-23 18:40:28.835260053 +0000 UTC m=+2371.837717986" observedRunningTime="2026-01-23 18:40:29.297524572 +0000 UTC m=+2372.299982525" watchObservedRunningTime="2026-01-23 18:40:29.306740906 +0000 UTC m=+2372.309198839" Jan 23 18:40:34 crc kubenswrapper[4760]: I0123 18:40:34.320871 4760 generic.go:334] "Generic (PLEG): container finished" podID="4824cd7d-8d66-48ac-bf98-f7f4ee516458" containerID="01b337ae7ce8dcbfaf43df905d989844d6ebd511201a244981ed8d39bd9ad72d" exitCode=0 Jan 23 18:40:34 crc kubenswrapper[4760]: I0123 18:40:34.320989 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" event={"ID":"4824cd7d-8d66-48ac-bf98-f7f4ee516458","Type":"ContainerDied","Data":"01b337ae7ce8dcbfaf43df905d989844d6ebd511201a244981ed8d39bd9ad72d"} Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.762769 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.911945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam\") pod \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.912065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory\") pod \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.912176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9lnq\" (UniqueName: \"kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq\") pod \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.912784 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph\") pod \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\" (UID: \"4824cd7d-8d66-48ac-bf98-f7f4ee516458\") " Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.927735 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph" (OuterVolumeSpecName: "ceph") pod "4824cd7d-8d66-48ac-bf98-f7f4ee516458" (UID: "4824cd7d-8d66-48ac-bf98-f7f4ee516458"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.927796 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq" (OuterVolumeSpecName: "kube-api-access-z9lnq") pod "4824cd7d-8d66-48ac-bf98-f7f4ee516458" (UID: "4824cd7d-8d66-48ac-bf98-f7f4ee516458"). InnerVolumeSpecName "kube-api-access-z9lnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.943925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4824cd7d-8d66-48ac-bf98-f7f4ee516458" (UID: "4824cd7d-8d66-48ac-bf98-f7f4ee516458"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:35 crc kubenswrapper[4760]: I0123 18:40:35.953760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory" (OuterVolumeSpecName: "inventory") pod "4824cd7d-8d66-48ac-bf98-f7f4ee516458" (UID: "4824cd7d-8d66-48ac-bf98-f7f4ee516458"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.015228 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9lnq\" (UniqueName: \"kubernetes.io/projected/4824cd7d-8d66-48ac-bf98-f7f4ee516458-kube-api-access-z9lnq\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.015272 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.015286 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.015299 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4824cd7d-8d66-48ac-bf98-f7f4ee516458-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.362550 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" event={"ID":"4824cd7d-8d66-48ac-bf98-f7f4ee516458","Type":"ContainerDied","Data":"b5fa007dd1f24eca35f21856796969a46e9f8a11b3e604f1e9d618d584118a5f"} Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.362613 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5fa007dd1f24eca35f21856796969a46e9f8a11b3e604f1e9d618d584118a5f" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.362695 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.412953 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz"] Jan 23 18:40:36 crc kubenswrapper[4760]: E0123 18:40:36.413392 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4824cd7d-8d66-48ac-bf98-f7f4ee516458" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.413449 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4824cd7d-8d66-48ac-bf98-f7f4ee516458" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.413670 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4824cd7d-8d66-48ac-bf98-f7f4ee516458" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.414243 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.417400 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.418680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.418863 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.419021 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.424441 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.428999 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz"] Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.527868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.528153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: 
\"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.528204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.528267 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8g4\" (UniqueName: \"kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.630001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8g4\" (UniqueName: \"kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.630148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.630200 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.630290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.635809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.637618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.641110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.659153 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8g4\" (UniqueName: \"kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m9jvz\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:36 crc kubenswrapper[4760]: I0123 18:40:36.738287 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:40:37 crc kubenswrapper[4760]: I0123 18:40:37.300682 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz"] Jan 23 18:40:37 crc kubenswrapper[4760]: I0123 18:40:37.372913 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" event={"ID":"268fb02f-f216-4953-9868-e7b1d27448f2","Type":"ContainerStarted","Data":"78a08d3b26a468123322bf679e32c2dfbe7bbceb6a9b2871ebc5d9f110b03cb7"} Jan 23 18:40:38 crc kubenswrapper[4760]: I0123 18:40:38.381311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" event={"ID":"268fb02f-f216-4953-9868-e7b1d27448f2","Type":"ContainerStarted","Data":"6a0217a3c6b88a3e371f6b3c26dea578ff05d0b4090cbbdbe11a573ad6e5fb36"} Jan 23 18:40:38 crc kubenswrapper[4760]: I0123 18:40:38.408629 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" podStartSLOduration=1.64568132 podStartE2EDuration="2.408610068s" podCreationTimestamp="2026-01-23 18:40:36 +0000 UTC" firstStartedPulling="2026-01-23 18:40:37.322471601 +0000 UTC m=+2380.324929564" 
lastFinishedPulling="2026-01-23 18:40:38.085400379 +0000 UTC m=+2381.087858312" observedRunningTime="2026-01-23 18:40:38.400136875 +0000 UTC m=+2381.402594838" watchObservedRunningTime="2026-01-23 18:40:38.408610068 +0000 UTC m=+2381.411068011" Jan 23 18:40:42 crc kubenswrapper[4760]: I0123 18:40:42.595209 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:40:42 crc kubenswrapper[4760]: E0123 18:40:42.596109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:40:57 crc kubenswrapper[4760]: I0123 18:40:57.601649 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:40:57 crc kubenswrapper[4760]: E0123 18:40:57.602419 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:41:12 crc kubenswrapper[4760]: I0123 18:41:12.595156 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:41:12 crc kubenswrapper[4760]: E0123 18:41:12.596070 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:41:14 crc kubenswrapper[4760]: I0123 18:41:14.690476 4760 generic.go:334] "Generic (PLEG): container finished" podID="268fb02f-f216-4953-9868-e7b1d27448f2" containerID="6a0217a3c6b88a3e371f6b3c26dea578ff05d0b4090cbbdbe11a573ad6e5fb36" exitCode=0 Jan 23 18:41:14 crc kubenswrapper[4760]: I0123 18:41:14.690545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" event={"ID":"268fb02f-f216-4953-9868-e7b1d27448f2","Type":"ContainerDied","Data":"6a0217a3c6b88a3e371f6b3c26dea578ff05d0b4090cbbdbe11a573ad6e5fb36"} Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.099512 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.218853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory\") pod \"268fb02f-f216-4953-9868-e7b1d27448f2\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.218995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d8g4\" (UniqueName: \"kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4\") pod \"268fb02f-f216-4953-9868-e7b1d27448f2\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.219041 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph\") pod \"268fb02f-f216-4953-9868-e7b1d27448f2\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.219157 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam\") pod \"268fb02f-f216-4953-9868-e7b1d27448f2\" (UID: \"268fb02f-f216-4953-9868-e7b1d27448f2\") " Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.228655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph" (OuterVolumeSpecName: "ceph") pod "268fb02f-f216-4953-9868-e7b1d27448f2" (UID: "268fb02f-f216-4953-9868-e7b1d27448f2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.230599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4" (OuterVolumeSpecName: "kube-api-access-6d8g4") pod "268fb02f-f216-4953-9868-e7b1d27448f2" (UID: "268fb02f-f216-4953-9868-e7b1d27448f2"). InnerVolumeSpecName "kube-api-access-6d8g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.247481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "268fb02f-f216-4953-9868-e7b1d27448f2" (UID: "268fb02f-f216-4953-9868-e7b1d27448f2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.248693 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory" (OuterVolumeSpecName: "inventory") pod "268fb02f-f216-4953-9868-e7b1d27448f2" (UID: "268fb02f-f216-4953-9868-e7b1d27448f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.352516 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d8g4\" (UniqueName: \"kubernetes.io/projected/268fb02f-f216-4953-9868-e7b1d27448f2-kube-api-access-6d8g4\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.352556 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.352571 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.352584 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268fb02f-f216-4953-9868-e7b1d27448f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.706203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" event={"ID":"268fb02f-f216-4953-9868-e7b1d27448f2","Type":"ContainerDied","Data":"78a08d3b26a468123322bf679e32c2dfbe7bbceb6a9b2871ebc5d9f110b03cb7"} Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.706573 4760 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="78a08d3b26a468123322bf679e32c2dfbe7bbceb6a9b2871ebc5d9f110b03cb7" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.706285 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m9jvz" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.812785 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg"] Jan 23 18:41:16 crc kubenswrapper[4760]: E0123 18:41:16.813488 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268fb02f-f216-4953-9868-e7b1d27448f2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.813592 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="268fb02f-f216-4953-9868-e7b1d27448f2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.813899 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="268fb02f-f216-4953-9868-e7b1d27448f2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.814686 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.819724 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.819825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.819724 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.820016 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.820069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.822014 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg"] Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.962658 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rsw\" (UniqueName: \"kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.962728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: 
\"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.962754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:16 crc kubenswrapper[4760]: I0123 18:41:16.962832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.064568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rsw\" (UniqueName: \"kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.064880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.064959 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.065135 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.070589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.071910 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.071923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: 
\"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.091278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rsw\" (UniqueName: \"kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:17 crc kubenswrapper[4760]: I0123 18:41:17.130006 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:18 crc kubenswrapper[4760]: I0123 18:41:18.188096 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg"] Jan 23 18:41:18 crc kubenswrapper[4760]: I0123 18:41:18.730729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" event={"ID":"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80","Type":"ContainerStarted","Data":"cb0f4d9daab68403eeecceafd006b7f4b3cf48de304b7ba80671e333b87a345d"} Jan 23 18:41:19 crc kubenswrapper[4760]: I0123 18:41:19.738226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" event={"ID":"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80","Type":"ContainerStarted","Data":"c6fef81e300ffd59ce1f73b3ac513a124e01d9c0dce681cd786ddd228c5aecc3"} Jan 23 18:41:19 crc kubenswrapper[4760]: I0123 18:41:19.754270 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" podStartSLOduration=3.329439305 podStartE2EDuration="3.754251863s" podCreationTimestamp="2026-01-23 18:41:16 +0000 UTC" 
firstStartedPulling="2026-01-23 18:41:18.192335918 +0000 UTC m=+2421.194793851" lastFinishedPulling="2026-01-23 18:41:18.617148466 +0000 UTC m=+2421.619606409" observedRunningTime="2026-01-23 18:41:19.753941015 +0000 UTC m=+2422.756398948" watchObservedRunningTime="2026-01-23 18:41:19.754251863 +0000 UTC m=+2422.756709796" Jan 23 18:41:23 crc kubenswrapper[4760]: I0123 18:41:23.595331 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:41:23 crc kubenswrapper[4760]: E0123 18:41:23.596108 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:41:23 crc kubenswrapper[4760]: I0123 18:41:23.778714 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" containerID="c6fef81e300ffd59ce1f73b3ac513a124e01d9c0dce681cd786ddd228c5aecc3" exitCode=0 Jan 23 18:41:23 crc kubenswrapper[4760]: I0123 18:41:23.778751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" event={"ID":"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80","Type":"ContainerDied","Data":"c6fef81e300ffd59ce1f73b3ac513a124e01d9c0dce681cd786ddd228c5aecc3"} Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.208747 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.325734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rsw\" (UniqueName: \"kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw\") pod \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.325854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory\") pod \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.326100 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph\") pod \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.326207 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam\") pod \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\" (UID: \"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80\") " Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.336650 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph" (OuterVolumeSpecName: "ceph") pod "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" (UID: "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.336798 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw" (OuterVolumeSpecName: "kube-api-access-v4rsw") pod "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" (UID: "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80"). InnerVolumeSpecName "kube-api-access-v4rsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.357290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory" (OuterVolumeSpecName: "inventory") pod "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" (UID: "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.363359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" (UID: "6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.429669 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.429740 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.429768 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rsw\" (UniqueName: \"kubernetes.io/projected/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-kube-api-access-v4rsw\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.429787 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.795841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" event={"ID":"6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80","Type":"ContainerDied","Data":"cb0f4d9daab68403eeecceafd006b7f4b3cf48de304b7ba80671e333b87a345d"} Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.795889 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0f4d9daab68403eeecceafd006b7f4b3cf48de304b7ba80671e333b87a345d" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.795893 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.873611 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5"] Jan 23 18:41:25 crc kubenswrapper[4760]: E0123 18:41:25.874054 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.874083 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.874340 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.875126 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.878047 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.878315 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.878531 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.878667 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.884888 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:41:25 crc kubenswrapper[4760]: I0123 18:41:25.891289 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5"] Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.039587 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.039661 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: 
\"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.039856 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp67\" (UniqueName: \"kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.039897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.142868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp67\" (UniqueName: \"kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.142968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.143042 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.143106 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.149887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.150273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.156601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: 
\"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.174352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp67\" (UniqueName: \"kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.197390 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.745086 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5"] Jan 23 18:41:26 crc kubenswrapper[4760]: I0123 18:41:26.804907 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" event={"ID":"74d60425-0689-4af1-b745-22453031dcfe","Type":"ContainerStarted","Data":"eb7947fd4cca41e345c0fd386cefb144d557bd2614c69a9f2cfb191c2d000274"} Jan 23 18:41:27 crc kubenswrapper[4760]: I0123 18:41:27.814568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" event={"ID":"74d60425-0689-4af1-b745-22453031dcfe","Type":"ContainerStarted","Data":"e96debc8337ed868fbd9a55728bd0a415405306bc20fa090614a8922dbfb0467"} Jan 23 18:41:27 crc kubenswrapper[4760]: I0123 18:41:27.833604 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" podStartSLOduration=2.205261227 podStartE2EDuration="2.833584847s" podCreationTimestamp="2026-01-23 18:41:25 +0000 UTC" 
firstStartedPulling="2026-01-23 18:41:26.754994157 +0000 UTC m=+2429.757452090" lastFinishedPulling="2026-01-23 18:41:27.383317767 +0000 UTC m=+2430.385775710" observedRunningTime="2026-01-23 18:41:27.831944142 +0000 UTC m=+2430.834402085" watchObservedRunningTime="2026-01-23 18:41:27.833584847 +0000 UTC m=+2430.836042780" Jan 23 18:41:35 crc kubenswrapper[4760]: I0123 18:41:35.595500 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:41:35 crc kubenswrapper[4760]: E0123 18:41:35.596458 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:41:48 crc kubenswrapper[4760]: I0123 18:41:48.597028 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:41:48 crc kubenswrapper[4760]: E0123 18:41:48.598168 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:41:59 crc kubenswrapper[4760]: I0123 18:41:59.595638 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:41:59 crc kubenswrapper[4760]: E0123 18:41:59.596597 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:42:06 crc kubenswrapper[4760]: I0123 18:42:06.775810 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-585856f577-q8bpp" podUID="789429a2-8a44-4914-b54c-65e7ccaa180c" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 18:42:11 crc kubenswrapper[4760]: I0123 18:42:11.595856 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:42:11 crc kubenswrapper[4760]: E0123 18:42:11.596760 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:42:13 crc kubenswrapper[4760]: I0123 18:42:13.195208 4760 generic.go:334] "Generic (PLEG): container finished" podID="74d60425-0689-4af1-b745-22453031dcfe" containerID="e96debc8337ed868fbd9a55728bd0a415405306bc20fa090614a8922dbfb0467" exitCode=0 Jan 23 18:42:13 crc kubenswrapper[4760]: I0123 18:42:13.195284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" event={"ID":"74d60425-0689-4af1-b745-22453031dcfe","Type":"ContainerDied","Data":"e96debc8337ed868fbd9a55728bd0a415405306bc20fa090614a8922dbfb0467"} Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.588870 4760 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.693987 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") pod \"74d60425-0689-4af1-b745-22453031dcfe\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.694141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mp67\" (UniqueName: \"kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67\") pod \"74d60425-0689-4af1-b745-22453031dcfe\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.694255 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph\") pod \"74d60425-0689-4af1-b745-22453031dcfe\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.694370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory\") pod \"74d60425-0689-4af1-b745-22453031dcfe\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.701555 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph" (OuterVolumeSpecName: "ceph") pod "74d60425-0689-4af1-b745-22453031dcfe" (UID: "74d60425-0689-4af1-b745-22453031dcfe"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.701678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67" (OuterVolumeSpecName: "kube-api-access-8mp67") pod "74d60425-0689-4af1-b745-22453031dcfe" (UID: "74d60425-0689-4af1-b745-22453031dcfe"). InnerVolumeSpecName "kube-api-access-8mp67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.726889 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory" (OuterVolumeSpecName: "inventory") pod "74d60425-0689-4af1-b745-22453031dcfe" (UID: "74d60425-0689-4af1-b745-22453031dcfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.797368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74d60425-0689-4af1-b745-22453031dcfe" (UID: "74d60425-0689-4af1-b745-22453031dcfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.797483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") pod \"74d60425-0689-4af1-b745-22453031dcfe\" (UID: \"74d60425-0689-4af1-b745-22453031dcfe\") " Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.797974 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mp67\" (UniqueName: \"kubernetes.io/projected/74d60425-0689-4af1-b745-22453031dcfe-kube-api-access-8mp67\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.797994 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.798004 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:14 crc kubenswrapper[4760]: W0123 18:42:14.798110 4760 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/74d60425-0689-4af1-b745-22453031dcfe/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.798133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74d60425-0689-4af1-b745-22453031dcfe" (UID: "74d60425-0689-4af1-b745-22453031dcfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:14 crc kubenswrapper[4760]: I0123 18:42:14.899452 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d60425-0689-4af1-b745-22453031dcfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.218136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" event={"ID":"74d60425-0689-4af1-b745-22453031dcfe","Type":"ContainerDied","Data":"eb7947fd4cca41e345c0fd386cefb144d557bd2614c69a9f2cfb191c2d000274"} Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.218179 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7947fd4cca41e345c0fd386cefb144d557bd2614c69a9f2cfb191c2d000274" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.218210 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.316065 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pr2nd"] Jan 23 18:42:15 crc kubenswrapper[4760]: E0123 18:42:15.316908 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d60425-0689-4af1-b745-22453031dcfe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.317123 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d60425-0689-4af1-b745-22453031dcfe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.317468 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d60425-0689-4af1-b745-22453031dcfe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.318392 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.322670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.322746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.322684 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.322909 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.322984 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.332465 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pr2nd"] Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.410832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.411054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" 
Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.411094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2287\" (UniqueName: \"kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.411125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.513261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.513351 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2287\" (UniqueName: \"kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.513419 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.513494 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.518363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.519717 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.521329 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.534371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2287\" (UniqueName: \"kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287\") pod \"ssh-known-hosts-edpm-deployment-pr2nd\" (UID: 
\"05548e64-64a8-42d3-8611-6b10492801d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:15 crc kubenswrapper[4760]: I0123 18:42:15.642231 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:16 crc kubenswrapper[4760]: I0123 18:42:16.167534 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pr2nd"] Jan 23 18:42:16 crc kubenswrapper[4760]: I0123 18:42:16.226527 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" event={"ID":"05548e64-64a8-42d3-8611-6b10492801d6","Type":"ContainerStarted","Data":"be98fe2cd00e1a55a5732677f49f9cce3767a1b5763b7f09e5d31cd1974d0b1a"} Jan 23 18:42:17 crc kubenswrapper[4760]: I0123 18:42:17.241972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" event={"ID":"05548e64-64a8-42d3-8611-6b10492801d6","Type":"ContainerStarted","Data":"e65cc4928c3f772ad60ce155c390fc42fed766310daaa751d48fce5ad70d4999"} Jan 23 18:42:17 crc kubenswrapper[4760]: I0123 18:42:17.264339 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" podStartSLOduration=1.7336253560000001 podStartE2EDuration="2.264314589s" podCreationTimestamp="2026-01-23 18:42:15 +0000 UTC" firstStartedPulling="2026-01-23 18:42:16.1723951 +0000 UTC m=+2479.174853033" lastFinishedPulling="2026-01-23 18:42:16.703084333 +0000 UTC m=+2479.705542266" observedRunningTime="2026-01-23 18:42:17.259045644 +0000 UTC m=+2480.261503577" watchObservedRunningTime="2026-01-23 18:42:17.264314589 +0000 UTC m=+2480.266772532" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.524764 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.527258 4760 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.539875 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.709425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.709493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspxv\" (UniqueName: \"kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.709809 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.811991 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.812396 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.812541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.812767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.813126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qspxv\" (UniqueName: \"kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.834703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qspxv\" (UniqueName: \"kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv\") pod \"community-operators-db8tj\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:20 crc kubenswrapper[4760]: I0123 18:42:20.866194 4760 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:21 crc kubenswrapper[4760]: I0123 18:42:21.346026 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:22 crc kubenswrapper[4760]: I0123 18:42:22.279995 4760 generic.go:334] "Generic (PLEG): container finished" podID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerID="a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc" exitCode=0 Jan 23 18:42:22 crc kubenswrapper[4760]: I0123 18:42:22.280070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerDied","Data":"a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc"} Jan 23 18:42:22 crc kubenswrapper[4760]: I0123 18:42:22.281046 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerStarted","Data":"4aad0aab48c0041b8cc1805560b366b4b6d935d408928f212371d653a05ac1c5"} Jan 23 18:42:23 crc kubenswrapper[4760]: I0123 18:42:23.302162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerStarted","Data":"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a"} Jan 23 18:42:24 crc kubenswrapper[4760]: I0123 18:42:24.311783 4760 generic.go:334] "Generic (PLEG): container finished" podID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerID="ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a" exitCode=0 Jan 23 18:42:24 crc kubenswrapper[4760]: I0123 18:42:24.311839 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" 
event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerDied","Data":"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a"} Jan 23 18:42:25 crc kubenswrapper[4760]: I0123 18:42:25.330335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerStarted","Data":"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f"} Jan 23 18:42:25 crc kubenswrapper[4760]: I0123 18:42:25.348510 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-db8tj" podStartSLOduration=2.8580943 podStartE2EDuration="5.348489097s" podCreationTimestamp="2026-01-23 18:42:20 +0000 UTC" firstStartedPulling="2026-01-23 18:42:22.282075895 +0000 UTC m=+2485.284533848" lastFinishedPulling="2026-01-23 18:42:24.772470712 +0000 UTC m=+2487.774928645" observedRunningTime="2026-01-23 18:42:25.344996721 +0000 UTC m=+2488.347454664" watchObservedRunningTime="2026-01-23 18:42:25.348489097 +0000 UTC m=+2488.350947050" Jan 23 18:42:26 crc kubenswrapper[4760]: I0123 18:42:26.339385 4760 generic.go:334] "Generic (PLEG): container finished" podID="05548e64-64a8-42d3-8611-6b10492801d6" containerID="e65cc4928c3f772ad60ce155c390fc42fed766310daaa751d48fce5ad70d4999" exitCode=0 Jan 23 18:42:26 crc kubenswrapper[4760]: I0123 18:42:26.339447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" event={"ID":"05548e64-64a8-42d3-8611-6b10492801d6","Type":"ContainerDied","Data":"e65cc4928c3f772ad60ce155c390fc42fed766310daaa751d48fce5ad70d4999"} Jan 23 18:42:26 crc kubenswrapper[4760]: I0123 18:42:26.596122 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:42:26 crc kubenswrapper[4760]: E0123 18:42:26.596834 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.721200 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.843599 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam\") pod \"05548e64-64a8-42d3-8611-6b10492801d6\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.843917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0\") pod \"05548e64-64a8-42d3-8611-6b10492801d6\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.844024 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2287\" (UniqueName: \"kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287\") pod \"05548e64-64a8-42d3-8611-6b10492801d6\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.844040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph\") pod \"05548e64-64a8-42d3-8611-6b10492801d6\" (UID: \"05548e64-64a8-42d3-8611-6b10492801d6\") " Jan 23 18:42:27 crc 
kubenswrapper[4760]: I0123 18:42:27.848748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph" (OuterVolumeSpecName: "ceph") pod "05548e64-64a8-42d3-8611-6b10492801d6" (UID: "05548e64-64a8-42d3-8611-6b10492801d6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.849527 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287" (OuterVolumeSpecName: "kube-api-access-g2287") pod "05548e64-64a8-42d3-8611-6b10492801d6" (UID: "05548e64-64a8-42d3-8611-6b10492801d6"). InnerVolumeSpecName "kube-api-access-g2287". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.870515 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05548e64-64a8-42d3-8611-6b10492801d6" (UID: "05548e64-64a8-42d3-8611-6b10492801d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.875678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "05548e64-64a8-42d3-8611-6b10492801d6" (UID: "05548e64-64a8-42d3-8611-6b10492801d6"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.946715 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.946783 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.946797 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2287\" (UniqueName: \"kubernetes.io/projected/05548e64-64a8-42d3-8611-6b10492801d6-kube-api-access-g2287\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:27 crc kubenswrapper[4760]: I0123 18:42:27.946807 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05548e64-64a8-42d3-8611-6b10492801d6-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.357532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" event={"ID":"05548e64-64a8-42d3-8611-6b10492801d6","Type":"ContainerDied","Data":"be98fe2cd00e1a55a5732677f49f9cce3767a1b5763b7f09e5d31cd1974d0b1a"} Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.358026 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be98fe2cd00e1a55a5732677f49f9cce3767a1b5763b7f09e5d31cd1974d0b1a" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.357787 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pr2nd" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.440311 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm"] Jan 23 18:42:28 crc kubenswrapper[4760]: E0123 18:42:28.440844 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05548e64-64a8-42d3-8611-6b10492801d6" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.440869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="05548e64-64a8-42d3-8611-6b10492801d6" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.441123 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="05548e64-64a8-42d3-8611-6b10492801d6" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.441847 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.443542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.444249 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.445201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.445916 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.446523 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.455806 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm"] Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.556944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.557024 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.557170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5zg\" (UniqueName: \"kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.557398 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.658893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.658993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.659064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.659114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5zg\" (UniqueName: \"kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.664234 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.670006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.674436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc 
kubenswrapper[4760]: I0123 18:42:28.690365 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5zg\" (UniqueName: \"kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wgxkm\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:28 crc kubenswrapper[4760]: I0123 18:42:28.757848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:29 crc kubenswrapper[4760]: I0123 18:42:29.317637 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm"] Jan 23 18:42:29 crc kubenswrapper[4760]: W0123 18:42:29.330738 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5489ba9_2339_49ff_b4b1_5ac088f89e85.slice/crio-cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed WatchSource:0}: Error finding container cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed: Status 404 returned error can't find the container with id cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed Jan 23 18:42:29 crc kubenswrapper[4760]: I0123 18:42:29.367334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" event={"ID":"b5489ba9-2339-49ff-b4b1-5ac088f89e85","Type":"ContainerStarted","Data":"cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed"} Jan 23 18:42:30 crc kubenswrapper[4760]: I0123 18:42:30.866717 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:30 crc kubenswrapper[4760]: I0123 18:42:30.867071 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:30 crc kubenswrapper[4760]: I0123 18:42:30.920896 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:31 crc kubenswrapper[4760]: I0123 18:42:31.384638 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" event={"ID":"b5489ba9-2339-49ff-b4b1-5ac088f89e85","Type":"ContainerStarted","Data":"ba48d5a457bf9d662b4b24ea52bc1f6a4a84dc3a5af5386625a38042b7457a0b"} Jan 23 18:42:31 crc kubenswrapper[4760]: I0123 18:42:31.398497 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" podStartSLOduration=2.374898272 podStartE2EDuration="3.398478442s" podCreationTimestamp="2026-01-23 18:42:28 +0000 UTC" firstStartedPulling="2026-01-23 18:42:29.33280466 +0000 UTC m=+2492.335262603" lastFinishedPulling="2026-01-23 18:42:30.35638483 +0000 UTC m=+2493.358842773" observedRunningTime="2026-01-23 18:42:31.396492928 +0000 UTC m=+2494.398950861" watchObservedRunningTime="2026-01-23 18:42:31.398478442 +0000 UTC m=+2494.400936365" Jan 23 18:42:31 crc kubenswrapper[4760]: I0123 18:42:31.431068 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:31 crc kubenswrapper[4760]: I0123 18:42:31.478149 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:33 crc kubenswrapper[4760]: I0123 18:42:33.397885 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-db8tj" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="registry-server" containerID="cri-o://f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f" gracePeriod=2 Jan 23 18:42:34 crc 
kubenswrapper[4760]: I0123 18:42:34.326285 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.438624 4760 generic.go:334] "Generic (PLEG): container finished" podID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerID="f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f" exitCode=0 Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.438670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerDied","Data":"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f"} Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.438696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db8tj" event={"ID":"f45a0c2c-9389-4eaa-b385-b8cf60d660d6","Type":"ContainerDied","Data":"4aad0aab48c0041b8cc1805560b366b4b6d935d408928f212371d653a05ac1c5"} Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.438712 4760 scope.go:117] "RemoveContainer" containerID="f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.438855 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-db8tj" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.474640 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content\") pod \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.474817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qspxv\" (UniqueName: \"kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv\") pod \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.474895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities\") pod \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\" (UID: \"f45a0c2c-9389-4eaa-b385-b8cf60d660d6\") " Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.475862 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities" (OuterVolumeSpecName: "utilities") pod "f45a0c2c-9389-4eaa-b385-b8cf60d660d6" (UID: "f45a0c2c-9389-4eaa-b385-b8cf60d660d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.481484 4760 scope.go:117] "RemoveContainer" containerID="ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.482657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv" (OuterVolumeSpecName: "kube-api-access-qspxv") pod "f45a0c2c-9389-4eaa-b385-b8cf60d660d6" (UID: "f45a0c2c-9389-4eaa-b385-b8cf60d660d6"). InnerVolumeSpecName "kube-api-access-qspxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.540719 4760 scope.go:117] "RemoveContainer" containerID="a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.546531 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f45a0c2c-9389-4eaa-b385-b8cf60d660d6" (UID: "f45a0c2c-9389-4eaa-b385-b8cf60d660d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.577187 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qspxv\" (UniqueName: \"kubernetes.io/projected/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-kube-api-access-qspxv\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.577225 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.577250 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a0c2c-9389-4eaa-b385-b8cf60d660d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.578324 4760 scope.go:117] "RemoveContainer" containerID="f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f" Jan 23 18:42:34 crc kubenswrapper[4760]: E0123 18:42:34.578708 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f\": container with ID starting with f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f not found: ID does not exist" containerID="f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.578760 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f"} err="failed to get container status \"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f\": rpc error: code = NotFound desc = could not find container \"f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f\": container with ID 
starting with f1655407164b61cd86756ee071e677ca843e0c27620b339bdc5f2d3d3e2f9d2f not found: ID does not exist" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.578793 4760 scope.go:117] "RemoveContainer" containerID="ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a" Jan 23 18:42:34 crc kubenswrapper[4760]: E0123 18:42:34.579197 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a\": container with ID starting with ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a not found: ID does not exist" containerID="ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.579229 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a"} err="failed to get container status \"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a\": rpc error: code = NotFound desc = could not find container \"ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a\": container with ID starting with ad712f6124b7eed7fc3542d01bbc9109ba805f5881a5377eb5c3caaece2fd60a not found: ID does not exist" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.579253 4760 scope.go:117] "RemoveContainer" containerID="a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc" Jan 23 18:42:34 crc kubenswrapper[4760]: E0123 18:42:34.579525 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc\": container with ID starting with a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc not found: ID does not exist" containerID="a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc" Jan 23 
18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.579561 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc"} err="failed to get container status \"a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc\": rpc error: code = NotFound desc = could not find container \"a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc\": container with ID starting with a8dced28054f3137b45c9cde32f49bc21deec4906017d6073dab5309ed16adcc not found: ID does not exist" Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.779304 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:34 crc kubenswrapper[4760]: I0123 18:42:34.787185 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-db8tj"] Jan 23 18:42:35 crc kubenswrapper[4760]: I0123 18:42:35.609217 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" path="/var/lib/kubelet/pods/f45a0c2c-9389-4eaa-b385-b8cf60d660d6/volumes" Jan 23 18:42:38 crc kubenswrapper[4760]: I0123 18:42:38.471879 4760 generic.go:334] "Generic (PLEG): container finished" podID="b5489ba9-2339-49ff-b4b1-5ac088f89e85" containerID="ba48d5a457bf9d662b4b24ea52bc1f6a4a84dc3a5af5386625a38042b7457a0b" exitCode=0 Jan 23 18:42:38 crc kubenswrapper[4760]: I0123 18:42:38.471968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" event={"ID":"b5489ba9-2339-49ff-b4b1-5ac088f89e85","Type":"ContainerDied","Data":"ba48d5a457bf9d662b4b24ea52bc1f6a4a84dc3a5af5386625a38042b7457a0b"} Jan 23 18:42:39 crc kubenswrapper[4760]: I0123 18:42:39.898202 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.071734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam\") pod \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.072148 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph\") pod \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.072205 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory\") pod \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.072388 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg5zg\" (UniqueName: \"kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg\") pod \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\" (UID: \"b5489ba9-2339-49ff-b4b1-5ac088f89e85\") " Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.084682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg" (OuterVolumeSpecName: "kube-api-access-cg5zg") pod "b5489ba9-2339-49ff-b4b1-5ac088f89e85" (UID: "b5489ba9-2339-49ff-b4b1-5ac088f89e85"). InnerVolumeSpecName "kube-api-access-cg5zg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.084695 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph" (OuterVolumeSpecName: "ceph") pod "b5489ba9-2339-49ff-b4b1-5ac088f89e85" (UID: "b5489ba9-2339-49ff-b4b1-5ac088f89e85"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.120450 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5489ba9-2339-49ff-b4b1-5ac088f89e85" (UID: "b5489ba9-2339-49ff-b4b1-5ac088f89e85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.121314 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory" (OuterVolumeSpecName: "inventory") pod "b5489ba9-2339-49ff-b4b1-5ac088f89e85" (UID: "b5489ba9-2339-49ff-b4b1-5ac088f89e85"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.174488 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg5zg\" (UniqueName: \"kubernetes.io/projected/b5489ba9-2339-49ff-b4b1-5ac088f89e85-kube-api-access-cg5zg\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.174519 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.174533 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.174546 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5489ba9-2339-49ff-b4b1-5ac088f89e85-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.489180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" event={"ID":"b5489ba9-2339-49ff-b4b1-5ac088f89e85","Type":"ContainerDied","Data":"cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed"} Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.489231 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbac01421067ff163afe6a13a0e5f96691531c6973b79c2a59971e2e18950eed" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.489321 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wgxkm" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571181 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw"] Jan 23 18:42:40 crc kubenswrapper[4760]: E0123 18:42:40.571571 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="extract-utilities" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571589 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="extract-utilities" Jan 23 18:42:40 crc kubenswrapper[4760]: E0123 18:42:40.571613 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="registry-server" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571619 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="registry-server" Jan 23 18:42:40 crc kubenswrapper[4760]: E0123 18:42:40.571636 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5489ba9-2339-49ff-b4b1-5ac088f89e85" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571644 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5489ba9-2339-49ff-b4b1-5ac088f89e85" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:40 crc kubenswrapper[4760]: E0123 18:42:40.571664 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="extract-content" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571672 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="extract-content" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571841 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b5489ba9-2339-49ff-b4b1-5ac088f89e85" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.571854 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45a0c2c-9389-4eaa-b385-b8cf60d660d6" containerName="registry-server" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.572500 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.574679 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.575005 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.575123 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.575066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.575171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.584630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw"] Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.685803 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") 
" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.685887 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7jx5\" (UniqueName: \"kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.685915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.686069 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.787659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.787747 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.787790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7jx5\" (UniqueName: \"kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.787813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.791221 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.791224 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: 
\"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.791922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.809675 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7jx5\" (UniqueName: \"kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:40 crc kubenswrapper[4760]: I0123 18:42:40.899904 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:41 crc kubenswrapper[4760]: I0123 18:42:41.482659 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw"] Jan 23 18:42:41 crc kubenswrapper[4760]: I0123 18:42:41.500582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" event={"ID":"85969cec-7a43-4aed-9ec1-522308d222a1","Type":"ContainerStarted","Data":"35957d2b41963c51ec59e73d033c2de0743a1982c9414235bd5e6f9f082fa364"} Jan 23 18:42:41 crc kubenswrapper[4760]: I0123 18:42:41.599511 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:42:41 crc kubenswrapper[4760]: E0123 18:42:41.599903 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:42:42 crc kubenswrapper[4760]: I0123 18:42:42.509754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" event={"ID":"85969cec-7a43-4aed-9ec1-522308d222a1","Type":"ContainerStarted","Data":"5d13f2cab5998ab2c84b1d4e65b3e1223bb6beb5d97b01c4902a50ef3b4e5778"} Jan 23 18:42:42 crc kubenswrapper[4760]: I0123 18:42:42.533304 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" podStartSLOduration=2.03427335 podStartE2EDuration="2.533286307s" podCreationTimestamp="2026-01-23 18:42:40 +0000 UTC" firstStartedPulling="2026-01-23 
18:42:41.487519854 +0000 UTC m=+2504.489977797" lastFinishedPulling="2026-01-23 18:42:41.986532781 +0000 UTC m=+2504.988990754" observedRunningTime="2026-01-23 18:42:42.527357483 +0000 UTC m=+2505.529815436" watchObservedRunningTime="2026-01-23 18:42:42.533286307 +0000 UTC m=+2505.535744230" Jan 23 18:42:52 crc kubenswrapper[4760]: I0123 18:42:52.581862 4760 generic.go:334] "Generic (PLEG): container finished" podID="85969cec-7a43-4aed-9ec1-522308d222a1" containerID="5d13f2cab5998ab2c84b1d4e65b3e1223bb6beb5d97b01c4902a50ef3b4e5778" exitCode=0 Jan 23 18:42:52 crc kubenswrapper[4760]: I0123 18:42:52.581942 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" event={"ID":"85969cec-7a43-4aed-9ec1-522308d222a1","Type":"ContainerDied","Data":"5d13f2cab5998ab2c84b1d4e65b3e1223bb6beb5d97b01c4902a50ef3b4e5778"} Jan 23 18:42:53 crc kubenswrapper[4760]: I0123 18:42:53.599668 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:42:53 crc kubenswrapper[4760]: E0123 18:42:53.600135 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.038063 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.138474 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7jx5\" (UniqueName: \"kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5\") pod \"85969cec-7a43-4aed-9ec1-522308d222a1\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.138819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph\") pod \"85969cec-7a43-4aed-9ec1-522308d222a1\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.138869 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory\") pod \"85969cec-7a43-4aed-9ec1-522308d222a1\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.138987 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam\") pod \"85969cec-7a43-4aed-9ec1-522308d222a1\" (UID: \"85969cec-7a43-4aed-9ec1-522308d222a1\") " Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.144124 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5" (OuterVolumeSpecName: "kube-api-access-t7jx5") pod "85969cec-7a43-4aed-9ec1-522308d222a1" (UID: "85969cec-7a43-4aed-9ec1-522308d222a1"). InnerVolumeSpecName "kube-api-access-t7jx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.148229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph" (OuterVolumeSpecName: "ceph") pod "85969cec-7a43-4aed-9ec1-522308d222a1" (UID: "85969cec-7a43-4aed-9ec1-522308d222a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.164879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory" (OuterVolumeSpecName: "inventory") pod "85969cec-7a43-4aed-9ec1-522308d222a1" (UID: "85969cec-7a43-4aed-9ec1-522308d222a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.166160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85969cec-7a43-4aed-9ec1-522308d222a1" (UID: "85969cec-7a43-4aed-9ec1-522308d222a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.241197 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7jx5\" (UniqueName: \"kubernetes.io/projected/85969cec-7a43-4aed-9ec1-522308d222a1-kube-api-access-t7jx5\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.241478 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.241565 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.241656 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85969cec-7a43-4aed-9ec1-522308d222a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.598247 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" event={"ID":"85969cec-7a43-4aed-9ec1-522308d222a1","Type":"ContainerDied","Data":"35957d2b41963c51ec59e73d033c2de0743a1982c9414235bd5e6f9f082fa364"} Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.598276 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.598282 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35957d2b41963c51ec59e73d033c2de0743a1982c9414235bd5e6f9f082fa364" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.735821 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk"] Jan 23 18:42:54 crc kubenswrapper[4760]: E0123 18:42:54.736257 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85969cec-7a43-4aed-9ec1-522308d222a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.736276 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85969cec-7a43-4aed-9ec1-522308d222a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.736534 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85969cec-7a43-4aed-9ec1-522308d222a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.737279 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740447 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740789 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740849 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.740791 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.742216 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.742751 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.748546 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk"] Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852401 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4p2\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852587 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.852652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: 
\"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954835 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954872 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954919 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954974 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.954990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.955005 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4p2\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.955023 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.960394 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.960496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.960891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.962154 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.962967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.964285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.965356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.965741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: 
\"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.966668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.969692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.972260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc kubenswrapper[4760]: I0123 18:42:54.972776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:54 crc 
kubenswrapper[4760]: I0123 18:42:54.982610 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4p2\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:55 crc kubenswrapper[4760]: I0123 18:42:55.058471 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:42:55 crc kubenswrapper[4760]: I0123 18:42:55.568869 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk"] Jan 23 18:42:55 crc kubenswrapper[4760]: I0123 18:42:55.608584 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" event={"ID":"826eb339-b444-455e-b66e-f0e3fa00753d","Type":"ContainerStarted","Data":"6211720fb7b4db0692c32165731a1dbb465c5c744e5aac3adda932c7f6fbed9c"} Jan 23 18:42:56 crc kubenswrapper[4760]: I0123 18:42:56.618279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" event={"ID":"826eb339-b444-455e-b66e-f0e3fa00753d","Type":"ContainerStarted","Data":"8eb3970e32a1837d6928cd09acf1ed51b1d05c833b69a026a7e3404a4f90f3f0"} Jan 23 18:42:56 crc kubenswrapper[4760]: I0123 18:42:56.636162 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" podStartSLOduration=2.20935238 podStartE2EDuration="2.636140862s" podCreationTimestamp="2026-01-23 18:42:54 +0000 UTC" firstStartedPulling="2026-01-23 18:42:55.570990853 +0000 UTC m=+2518.573448806" lastFinishedPulling="2026-01-23 18:42:55.997779355 +0000 UTC 
m=+2519.000237288" observedRunningTime="2026-01-23 18:42:56.635911745 +0000 UTC m=+2519.638369708" watchObservedRunningTime="2026-01-23 18:42:56.636140862 +0000 UTC m=+2519.638598795" Jan 23 18:43:06 crc kubenswrapper[4760]: I0123 18:43:06.595232 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:43:06 crc kubenswrapper[4760]: E0123 18:43:06.595968 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:43:20 crc kubenswrapper[4760]: I0123 18:43:20.595710 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:43:20 crc kubenswrapper[4760]: E0123 18:43:20.596577 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:43:31 crc kubenswrapper[4760]: I0123 18:43:31.353899 4760 generic.go:334] "Generic (PLEG): container finished" podID="826eb339-b444-455e-b66e-f0e3fa00753d" containerID="8eb3970e32a1837d6928cd09acf1ed51b1d05c833b69a026a7e3404a4f90f3f0" exitCode=0 Jan 23 18:43:31 crc kubenswrapper[4760]: I0123 18:43:31.354112 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" 
event={"ID":"826eb339-b444-455e-b66e-f0e3fa00753d","Type":"ContainerDied","Data":"8eb3970e32a1837d6928cd09acf1ed51b1d05c833b69a026a7e3404a4f90f3f0"} Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.711878 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872591 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872647 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872667 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 
18:43:32.872717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872824 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872858 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4p2\" (UniqueName: 
\"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872927 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.872993 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"826eb339-b444-455e-b66e-f0e3fa00753d\" (UID: \"826eb339-b444-455e-b66e-f0e3fa00753d\") " Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.895137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.895211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.895343 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.898657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.898816 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.899169 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph" (OuterVolumeSpecName: "ceph") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.904823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2" (OuterVolumeSpecName: "kube-api-access-fk4p2") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "kube-api-access-fk4p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.921606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.921682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.921706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.921752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.961930 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974632 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974663 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974673 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974685 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4p2\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-kube-api-access-fk4p2\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974695 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974704 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974713 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974721 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974730 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/826eb339-b444-455e-b66e-f0e3fa00753d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974740 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974750 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.974759 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:32 crc kubenswrapper[4760]: I0123 18:43:32.987752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory" (OuterVolumeSpecName: "inventory") pod "826eb339-b444-455e-b66e-f0e3fa00753d" (UID: "826eb339-b444-455e-b66e-f0e3fa00753d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.076176 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826eb339-b444-455e-b66e-f0e3fa00753d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.374168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" event={"ID":"826eb339-b444-455e-b66e-f0e3fa00753d","Type":"ContainerDied","Data":"6211720fb7b4db0692c32165731a1dbb465c5c744e5aac3adda932c7f6fbed9c"} Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.374218 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6211720fb7b4db0692c32165731a1dbb465c5c744e5aac3adda932c7f6fbed9c" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.374296 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.523733 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72"] Jan 23 18:43:33 crc kubenswrapper[4760]: E0123 18:43:33.524166 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826eb339-b444-455e-b66e-f0e3fa00753d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.524190 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="826eb339-b444-455e-b66e-f0e3fa00753d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.524389 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="826eb339-b444-455e-b66e-f0e3fa00753d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:33 crc 
kubenswrapper[4760]: I0123 18:43:33.525663 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.529070 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.532840 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.533058 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.533134 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.533459 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.537623 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72"] Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.595535 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:43:33 crc kubenswrapper[4760]: E0123 18:43:33.595838 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.687454 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.687551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.687696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.687845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9scv9\" (UniqueName: \"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.788944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph\") 
pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.789024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.789083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.789145 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9scv9\" (UniqueName: \"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.793301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.793600 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.794218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.808324 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9scv9\" (UniqueName: \"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:33 crc kubenswrapper[4760]: I0123 18:43:33.864853 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:34 crc kubenswrapper[4760]: I0123 18:43:34.375599 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72"] Jan 23 18:43:34 crc kubenswrapper[4760]: I0123 18:43:34.393069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" event={"ID":"9d2d6784-f9bf-48c1-95a9-0ba4167059a6","Type":"ContainerStarted","Data":"40a40a1bece87aacd45b046ebbf6a4a68885bdf5fd341b414fc7b97396b9b7b8"} Jan 23 18:43:35 crc kubenswrapper[4760]: I0123 18:43:35.402109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" event={"ID":"9d2d6784-f9bf-48c1-95a9-0ba4167059a6","Type":"ContainerStarted","Data":"2a168e0b9e058c0186b46cf6b2b50fc5c53e70f14a03c8fdb349f4c2d08a99e2"} Jan 23 18:43:35 crc kubenswrapper[4760]: I0123 18:43:35.424088 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" podStartSLOduration=1.962888124 podStartE2EDuration="2.424066065s" podCreationTimestamp="2026-01-23 18:43:33 +0000 UTC" firstStartedPulling="2026-01-23 18:43:34.371568536 +0000 UTC m=+2557.374026469" lastFinishedPulling="2026-01-23 18:43:34.832746477 +0000 UTC m=+2557.835204410" observedRunningTime="2026-01-23 18:43:35.423375696 +0000 UTC m=+2558.425833629" watchObservedRunningTime="2026-01-23 18:43:35.424066065 +0000 UTC m=+2558.426523998" Jan 23 18:43:40 crc kubenswrapper[4760]: I0123 18:43:40.449750 4760 generic.go:334] "Generic (PLEG): container finished" podID="9d2d6784-f9bf-48c1-95a9-0ba4167059a6" containerID="2a168e0b9e058c0186b46cf6b2b50fc5c53e70f14a03c8fdb349f4c2d08a99e2" exitCode=0 Jan 23 18:43:40 crc kubenswrapper[4760]: I0123 18:43:40.449853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" event={"ID":"9d2d6784-f9bf-48c1-95a9-0ba4167059a6","Type":"ContainerDied","Data":"2a168e0b9e058c0186b46cf6b2b50fc5c53e70f14a03c8fdb349f4c2d08a99e2"} Jan 23 18:43:41 crc kubenswrapper[4760]: I0123 18:43:41.902832 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.043030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam\") pod \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.043128 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory\") pod \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.043187 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph\") pod \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.043227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9scv9\" (UniqueName: \"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9\") pod \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\" (UID: \"9d2d6784-f9bf-48c1-95a9-0ba4167059a6\") " Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.051061 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9" (OuterVolumeSpecName: "kube-api-access-9scv9") pod "9d2d6784-f9bf-48c1-95a9-0ba4167059a6" (UID: "9d2d6784-f9bf-48c1-95a9-0ba4167059a6"). InnerVolumeSpecName "kube-api-access-9scv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.051168 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph" (OuterVolumeSpecName: "ceph") pod "9d2d6784-f9bf-48c1-95a9-0ba4167059a6" (UID: "9d2d6784-f9bf-48c1-95a9-0ba4167059a6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.067398 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory" (OuterVolumeSpecName: "inventory") pod "9d2d6784-f9bf-48c1-95a9-0ba4167059a6" (UID: "9d2d6784-f9bf-48c1-95a9-0ba4167059a6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.074550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d2d6784-f9bf-48c1-95a9-0ba4167059a6" (UID: "9d2d6784-f9bf-48c1-95a9-0ba4167059a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.145701 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.145741 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.145753 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9scv9\" (UniqueName: \"kubernetes.io/projected/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-kube-api-access-9scv9\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.145764 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2d6784-f9bf-48c1-95a9-0ba4167059a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.471110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" event={"ID":"9d2d6784-f9bf-48c1-95a9-0ba4167059a6","Type":"ContainerDied","Data":"40a40a1bece87aacd45b046ebbf6a4a68885bdf5fd341b414fc7b97396b9b7b8"} Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.471197 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a40a1bece87aacd45b046ebbf6a4a68885bdf5fd341b414fc7b97396b9b7b8" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.471267 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.615994 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x"] Jan 23 18:43:42 crc kubenswrapper[4760]: E0123 18:43:42.616513 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2d6784-f9bf-48c1-95a9-0ba4167059a6" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.616548 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2d6784-f9bf-48c1-95a9-0ba4167059a6" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.616817 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2d6784-f9bf-48c1-95a9-0ba4167059a6" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.617530 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.620205 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.620471 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.620869 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.624009 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.626901 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.628290 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.634893 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x"] Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcxm\" (UniqueName: \"kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.669496 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.770801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.771457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.771646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.771826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcxm\" (UniqueName: \"kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.772072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.772308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.773012 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.778020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.780643 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" 
Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.781262 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.781679 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.791801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcxm\" (UniqueName: \"kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xpd9x\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:42 crc kubenswrapper[4760]: I0123 18:43:42.983549 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:43:43 crc kubenswrapper[4760]: I0123 18:43:43.344849 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x"] Jan 23 18:43:43 crc kubenswrapper[4760]: I0123 18:43:43.479048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" event={"ID":"12785b41-cc5b-4404-ac5d-42b24f3046b4","Type":"ContainerStarted","Data":"3c72bed568ab4c406b164e221ce9b81a1edc43fbc5904e0254a5d886b21007aa"} Jan 23 18:43:44 crc kubenswrapper[4760]: I0123 18:43:44.490702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" event={"ID":"12785b41-cc5b-4404-ac5d-42b24f3046b4","Type":"ContainerStarted","Data":"dff3b865ca0deca6715625d0158956323c9f146352ee655ab34d3a185ac40317"} Jan 23 18:43:44 crc kubenswrapper[4760]: I0123 18:43:44.509752 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" podStartSLOduration=2.030832853 podStartE2EDuration="2.509735514s" podCreationTimestamp="2026-01-23 18:43:42 +0000 UTC" firstStartedPulling="2026-01-23 18:43:43.344183001 +0000 UTC m=+2566.346640934" lastFinishedPulling="2026-01-23 18:43:43.823085662 +0000 UTC m=+2566.825543595" observedRunningTime="2026-01-23 18:43:44.508868779 +0000 UTC m=+2567.511326712" watchObservedRunningTime="2026-01-23 18:43:44.509735514 +0000 UTC m=+2567.512193457" Jan 23 18:43:46 crc kubenswrapper[4760]: I0123 18:43:46.594923 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:43:46 crc kubenswrapper[4760]: E0123 18:43:46.595744 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:44:01 crc kubenswrapper[4760]: I0123 18:44:01.595410 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:44:01 crc kubenswrapper[4760]: E0123 18:44:01.596139 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:44:16 crc kubenswrapper[4760]: I0123 18:44:16.595044 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:44:16 crc kubenswrapper[4760]: E0123 18:44:16.595825 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:44:30 crc kubenswrapper[4760]: I0123 18:44:30.595286 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:44:30 crc kubenswrapper[4760]: E0123 18:44:30.596054 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:44:42 crc kubenswrapper[4760]: I0123 18:44:42.595584 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:44:42 crc kubenswrapper[4760]: E0123 18:44:42.596604 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:44:57 crc kubenswrapper[4760]: I0123 18:44:57.608699 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:44:58 crc kubenswrapper[4760]: I0123 18:44:58.133002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0"} Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.149273 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8"] Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.150678 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.152492 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.152640 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.163949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8"] Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.249933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27rk\" (UniqueName: \"kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.250012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.250078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.351524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.351842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.351943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27rk\" (UniqueName: \"kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.357344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.359106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.372765 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27rk\" (UniqueName: \"kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk\") pod \"collect-profiles-29486565-bzbp8\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.515214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:00 crc kubenswrapper[4760]: W0123 18:45:00.959774 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6388456d_1bcc_4b94_844e_1a3f97272d66.slice/crio-d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739 WatchSource:0}: Error finding container d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739: Status 404 returned error can't find the container with id d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739 Jan 23 18:45:00 crc kubenswrapper[4760]: I0123 18:45:00.961606 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8"] Jan 23 18:45:01 crc kubenswrapper[4760]: I0123 18:45:01.163971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" event={"ID":"6388456d-1bcc-4b94-844e-1a3f97272d66","Type":"ContainerStarted","Data":"9926087643e73ef9af9cc2d11012022186a3f100f28a730aaef3a7f67a8f259b"} Jan 23 18:45:01 crc 
kubenswrapper[4760]: I0123 18:45:01.164333 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" event={"ID":"6388456d-1bcc-4b94-844e-1a3f97272d66","Type":"ContainerStarted","Data":"d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739"} Jan 23 18:45:01 crc kubenswrapper[4760]: I0123 18:45:01.188274 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" podStartSLOduration=1.188247932 podStartE2EDuration="1.188247932s" podCreationTimestamp="2026-01-23 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:45:01.178791832 +0000 UTC m=+2644.181249785" watchObservedRunningTime="2026-01-23 18:45:01.188247932 +0000 UTC m=+2644.190705865" Jan 23 18:45:02 crc kubenswrapper[4760]: I0123 18:45:02.175503 4760 generic.go:334] "Generic (PLEG): container finished" podID="6388456d-1bcc-4b94-844e-1a3f97272d66" containerID="9926087643e73ef9af9cc2d11012022186a3f100f28a730aaef3a7f67a8f259b" exitCode=0 Jan 23 18:45:02 crc kubenswrapper[4760]: I0123 18:45:02.175576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" event={"ID":"6388456d-1bcc-4b94-844e-1a3f97272d66","Type":"ContainerDied","Data":"9926087643e73ef9af9cc2d11012022186a3f100f28a730aaef3a7f67a8f259b"} Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.486627 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.619400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume\") pod \"6388456d-1bcc-4b94-844e-1a3f97272d66\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.619853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume\") pod \"6388456d-1bcc-4b94-844e-1a3f97272d66\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.619893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27rk\" (UniqueName: \"kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk\") pod \"6388456d-1bcc-4b94-844e-1a3f97272d66\" (UID: \"6388456d-1bcc-4b94-844e-1a3f97272d66\") " Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.621787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume" (OuterVolumeSpecName: "config-volume") pod "6388456d-1bcc-4b94-844e-1a3f97272d66" (UID: "6388456d-1bcc-4b94-844e-1a3f97272d66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.625471 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk" (OuterVolumeSpecName: "kube-api-access-t27rk") pod "6388456d-1bcc-4b94-844e-1a3f97272d66" (UID: "6388456d-1bcc-4b94-844e-1a3f97272d66"). 
InnerVolumeSpecName "kube-api-access-t27rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.625517 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6388456d-1bcc-4b94-844e-1a3f97272d66" (UID: "6388456d-1bcc-4b94-844e-1a3f97272d66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.722095 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6388456d-1bcc-4b94-844e-1a3f97272d66-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.722131 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27rk\" (UniqueName: \"kubernetes.io/projected/6388456d-1bcc-4b94-844e-1a3f97272d66-kube-api-access-t27rk\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:03 crc kubenswrapper[4760]: I0123 18:45:03.722143 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6388456d-1bcc-4b94-844e-1a3f97272d66-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:04 crc kubenswrapper[4760]: I0123 18:45:04.190725 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" event={"ID":"6388456d-1bcc-4b94-844e-1a3f97272d66","Type":"ContainerDied","Data":"d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739"} Jan 23 18:45:04 crc kubenswrapper[4760]: I0123 18:45:04.190774 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d96dfc09f47adb5c75b1e06869568edda2be0a0a1c61ba4550ab629ed95c6739" Jan 23 18:45:04 crc kubenswrapper[4760]: I0123 18:45:04.190835 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8" Jan 23 18:45:04 crc kubenswrapper[4760]: I0123 18:45:04.253490 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"] Jan 23 18:45:04 crc kubenswrapper[4760]: I0123 18:45:04.265288 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-pcprg"] Jan 23 18:45:05 crc kubenswrapper[4760]: I0123 18:45:05.608942 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff72bd3-e963-4c1b-8187-dbff951b1f8d" path="/var/lib/kubelet/pods/aff72bd3-e963-4c1b-8187-dbff951b1f8d/volumes" Jan 23 18:45:11 crc kubenswrapper[4760]: I0123 18:45:11.250259 4760 generic.go:334] "Generic (PLEG): container finished" podID="12785b41-cc5b-4404-ac5d-42b24f3046b4" containerID="dff3b865ca0deca6715625d0158956323c9f146352ee655ab34d3a185ac40317" exitCode=0 Jan 23 18:45:11 crc kubenswrapper[4760]: I0123 18:45:11.250358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" event={"ID":"12785b41-cc5b-4404-ac5d-42b24f3046b4","Type":"ContainerDied","Data":"dff3b865ca0deca6715625d0158956323c9f146352ee655ab34d3a185ac40317"} Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.633656 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698581 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.698654 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-tfcxm\" (UniqueName: \"kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm\") pod \"12785b41-cc5b-4404-ac5d-42b24f3046b4\" (UID: \"12785b41-cc5b-4404-ac5d-42b24f3046b4\") " Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.704206 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.704707 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph" (OuterVolumeSpecName: "ceph") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.704974 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm" (OuterVolumeSpecName: "kube-api-access-tfcxm") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "kube-api-access-tfcxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.723119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory" (OuterVolumeSpecName: "inventory") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.723914 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.732252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12785b41-cc5b-4404-ac5d-42b24f3046b4" (UID: "12785b41-cc5b-4404-ac5d-42b24f3046b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.800748 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.800929 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.800991 4760 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.801055 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.801128 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfcxm\" (UniqueName: \"kubernetes.io/projected/12785b41-cc5b-4404-ac5d-42b24f3046b4-kube-api-access-tfcxm\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:12 crc kubenswrapper[4760]: I0123 18:45:12.801188 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12785b41-cc5b-4404-ac5d-42b24f3046b4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.264783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" event={"ID":"12785b41-cc5b-4404-ac5d-42b24f3046b4","Type":"ContainerDied","Data":"3c72bed568ab4c406b164e221ce9b81a1edc43fbc5904e0254a5d886b21007aa"} Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.264822 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c72bed568ab4c406b164e221ce9b81a1edc43fbc5904e0254a5d886b21007aa" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.264870 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xpd9x" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.358706 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx"] Jan 23 18:45:13 crc kubenswrapper[4760]: E0123 18:45:13.363457 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12785b41-cc5b-4404-ac5d-42b24f3046b4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.363665 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="12785b41-cc5b-4404-ac5d-42b24f3046b4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:13 crc kubenswrapper[4760]: E0123 18:45:13.363765 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6388456d-1bcc-4b94-844e-1a3f97272d66" containerName="collect-profiles" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.363819 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6388456d-1bcc-4b94-844e-1a3f97272d66" containerName="collect-profiles" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.364082 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="12785b41-cc5b-4404-ac5d-42b24f3046b4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.364167 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6388456d-1bcc-4b94-844e-1a3f97272d66" containerName="collect-profiles" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.364872 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.368224 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.368270 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.369815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.369922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.369948 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.369955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.370046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.371911 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx"] Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jwk\" (UniqueName: \"kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513393 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513440 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.513983 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.514109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jwk\" (UniqueName: \"kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 
crc kubenswrapper[4760]: I0123 18:45:13.616798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616857 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.616966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.621659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.621889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.622089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.623324 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: 
\"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.623913 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.624306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.636342 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jwk\" (UniqueName: \"kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:13 crc kubenswrapper[4760]: I0123 18:45:13.684326 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:45:14 crc kubenswrapper[4760]: I0123 18:45:14.198792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx"] Jan 23 18:45:14 crc kubenswrapper[4760]: I0123 18:45:14.208174 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:45:14 crc kubenswrapper[4760]: I0123 18:45:14.273212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" event={"ID":"836b1ef8-b075-4321-9f13-18120bc8d010","Type":"ContainerStarted","Data":"31b4cd7f644cf47227d4dc7c540303d01abcbe06bf62120fb7d03e6b1cd9c6b4"} Jan 23 18:45:15 crc kubenswrapper[4760]: I0123 18:45:15.308963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" event={"ID":"836b1ef8-b075-4321-9f13-18120bc8d010","Type":"ContainerStarted","Data":"ebb53cf011ef21d09d6c950c9edb705357d71ccb2e68f54fc9e63f29e1f411a0"} Jan 23 18:45:15 crc kubenswrapper[4760]: I0123 18:45:15.331565 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" podStartSLOduration=1.885474054 podStartE2EDuration="2.331546408s" podCreationTimestamp="2026-01-23 18:45:13 +0000 UTC" firstStartedPulling="2026-01-23 18:45:14.207862622 +0000 UTC m=+2657.210320555" lastFinishedPulling="2026-01-23 18:45:14.653934986 +0000 UTC m=+2657.656392909" observedRunningTime="2026-01-23 18:45:15.325006008 +0000 UTC m=+2658.327463951" watchObservedRunningTime="2026-01-23 18:45:15.331546408 +0000 UTC m=+2658.334004351" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.424954 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:35 crc 
kubenswrapper[4760]: I0123 18:45:35.427169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.483834 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.499193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.499270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfh8\" (UniqueName: \"kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.499473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.601699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 
18:45:35.601934 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfh8\" (UniqueName: \"kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.602038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.603238 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.605800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.635385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfh8\" (UniqueName: \"kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8\") pod \"redhat-operators-qsgp2\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:35 crc kubenswrapper[4760]: I0123 18:45:35.748548 4760 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:36 crc kubenswrapper[4760]: I0123 18:45:36.236913 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:36 crc kubenswrapper[4760]: I0123 18:45:36.614698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerStarted","Data":"7a139c89114ea8b551c9f0379568bd875d8d34903a1ab03739e2e9ac40fe6f36"} Jan 23 18:45:37 crc kubenswrapper[4760]: I0123 18:45:37.633776 4760 generic.go:334] "Generic (PLEG): container finished" podID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerID="4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5" exitCode=0 Jan 23 18:45:37 crc kubenswrapper[4760]: I0123 18:45:37.634039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerDied","Data":"4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5"} Jan 23 18:45:39 crc kubenswrapper[4760]: I0123 18:45:39.661945 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerStarted","Data":"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62"} Jan 23 18:45:41 crc kubenswrapper[4760]: I0123 18:45:41.678434 4760 generic.go:334] "Generic (PLEG): container finished" podID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerID="8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62" exitCode=0 Jan 23 18:45:41 crc kubenswrapper[4760]: I0123 18:45:41.678534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" 
event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerDied","Data":"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62"} Jan 23 18:45:43 crc kubenswrapper[4760]: I0123 18:45:43.700543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerStarted","Data":"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd"} Jan 23 18:45:43 crc kubenswrapper[4760]: I0123 18:45:43.722549 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsgp2" podStartSLOduration=3.864527472 podStartE2EDuration="8.722531765s" podCreationTimestamp="2026-01-23 18:45:35 +0000 UTC" firstStartedPulling="2026-01-23 18:45:37.635232428 +0000 UTC m=+2680.637690361" lastFinishedPulling="2026-01-23 18:45:42.493236721 +0000 UTC m=+2685.495694654" observedRunningTime="2026-01-23 18:45:43.718579525 +0000 UTC m=+2686.721037468" watchObservedRunningTime="2026-01-23 18:45:43.722531765 +0000 UTC m=+2686.724989708" Jan 23 18:45:45 crc kubenswrapper[4760]: I0123 18:45:45.749997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:45 crc kubenswrapper[4760]: I0123 18:45:45.750305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:46 crc kubenswrapper[4760]: I0123 18:45:46.821052 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsgp2" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="registry-server" probeResult="failure" output=< Jan 23 18:45:46 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:45:46 crc kubenswrapper[4760]: > Jan 23 18:45:55 crc kubenswrapper[4760]: I0123 18:45:55.794856 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:55 crc kubenswrapper[4760]: I0123 18:45:55.847937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:56 crc kubenswrapper[4760]: I0123 18:45:56.037058 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:56 crc kubenswrapper[4760]: I0123 18:45:56.814937 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsgp2" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="registry-server" containerID="cri-o://ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd" gracePeriod=2 Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.245137 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.400971 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content\") pod \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.401042 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfh8\" (UniqueName: \"kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8\") pod \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.401106 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities\") pod 
\"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\" (UID: \"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6\") " Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.401989 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities" (OuterVolumeSpecName: "utilities") pod "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" (UID: "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.407193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8" (OuterVolumeSpecName: "kube-api-access-pwfh8") pod "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" (UID: "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6"). InnerVolumeSpecName "kube-api-access-pwfh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.507390 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfh8\" (UniqueName: \"kubernetes.io/projected/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-kube-api-access-pwfh8\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.507453 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.521358 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" (UID: "4bf459bc-e4bc-4731-8d96-2c77eb96c6b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.608764 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.827635 4760 generic.go:334] "Generic (PLEG): container finished" podID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerID="ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd" exitCode=0 Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.827707 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerDied","Data":"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd"} Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.827910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsgp2" event={"ID":"4bf459bc-e4bc-4731-8d96-2c77eb96c6b6","Type":"ContainerDied","Data":"7a139c89114ea8b551c9f0379568bd875d8d34903a1ab03739e2e9ac40fe6f36"} Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.827763 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsgp2" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.827934 4760 scope.go:117] "RemoveContainer" containerID="ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.853173 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.859104 4760 scope.go:117] "RemoveContainer" containerID="8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.860453 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsgp2"] Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.880133 4760 scope.go:117] "RemoveContainer" containerID="4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.916196 4760 scope.go:117] "RemoveContainer" containerID="ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd" Jan 23 18:45:57 crc kubenswrapper[4760]: E0123 18:45:57.916769 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd\": container with ID starting with ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd not found: ID does not exist" containerID="ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.916802 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd"} err="failed to get container status \"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd\": rpc error: code = NotFound desc = could not find container 
\"ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd\": container with ID starting with ce6cdcd4e80453f1cbde99c2d6e82757577c1fa1ec49f9548705d4238a5789bd not found: ID does not exist" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.916825 4760 scope.go:117] "RemoveContainer" containerID="8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62" Jan 23 18:45:57 crc kubenswrapper[4760]: E0123 18:45:57.917394 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62\": container with ID starting with 8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62 not found: ID does not exist" containerID="8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.917441 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62"} err="failed to get container status \"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62\": rpc error: code = NotFound desc = could not find container \"8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62\": container with ID starting with 8d473d147249156f6b1b2edfea1d488b7dee3ba65e7c46d62011121151099c62 not found: ID does not exist" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.917457 4760 scope.go:117] "RemoveContainer" containerID="4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5" Jan 23 18:45:57 crc kubenswrapper[4760]: E0123 18:45:57.918065 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5\": container with ID starting with 4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5 not found: ID does not exist" 
containerID="4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5" Jan 23 18:45:57 crc kubenswrapper[4760]: I0123 18:45:57.918098 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5"} err="failed to get container status \"4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5\": rpc error: code = NotFound desc = could not find container \"4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5\": container with ID starting with 4542064b34b9253a63779ce186a80e94af1f8918adb077f5616f40a3168552f5 not found: ID does not exist" Jan 23 18:45:59 crc kubenswrapper[4760]: I0123 18:45:59.606323 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" path="/var/lib/kubelet/pods/4bf459bc-e4bc-4731-8d96-2c77eb96c6b6/volumes" Jan 23 18:46:02 crc kubenswrapper[4760]: I0123 18:46:02.197891 4760 scope.go:117] "RemoveContainer" containerID="23bf769bba395edadd20f40b2413e91d0b4e54dfad61e24276ab090bd4610530" Jan 23 18:46:19 crc kubenswrapper[4760]: I0123 18:46:19.009367 4760 generic.go:334] "Generic (PLEG): container finished" podID="836b1ef8-b075-4321-9f13-18120bc8d010" containerID="ebb53cf011ef21d09d6c950c9edb705357d71ccb2e68f54fc9e63f29e1f411a0" exitCode=0 Jan 23 18:46:19 crc kubenswrapper[4760]: I0123 18:46:19.009465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" event={"ID":"836b1ef8-b075-4321-9f13-18120bc8d010","Type":"ContainerDied","Data":"ebb53cf011ef21d09d6c950c9edb705357d71ccb2e68f54fc9e63f29e1f411a0"} Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.516381 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633329 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633458 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633578 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633684 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633757 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633794 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.633879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jwk\" (UniqueName: \"kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk\") pod \"836b1ef8-b075-4321-9f13-18120bc8d010\" (UID: \"836b1ef8-b075-4321-9f13-18120bc8d010\") " Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.643618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph" (OuterVolumeSpecName: "ceph") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.643806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.647671 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk" (OuterVolumeSpecName: "kube-api-access-88jwk") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). 
InnerVolumeSpecName "kube-api-access-88jwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.676073 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory" (OuterVolumeSpecName: "inventory") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.678270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.678513 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.686628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "836b1ef8-b075-4321-9f13-18120bc8d010" (UID: "836b1ef8-b075-4321-9f13-18120bc8d010"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736176 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736219 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736234 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736245 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736257 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736269 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/836b1ef8-b075-4321-9f13-18120bc8d010-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:20 crc kubenswrapper[4760]: I0123 18:46:20.736283 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jwk\" (UniqueName: 
\"kubernetes.io/projected/836b1ef8-b075-4321-9f13-18120bc8d010-kube-api-access-88jwk\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.031208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" event={"ID":"836b1ef8-b075-4321-9f13-18120bc8d010","Type":"ContainerDied","Data":"31b4cd7f644cf47227d4dc7c540303d01abcbe06bf62120fb7d03e6b1cd9c6b4"} Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.031667 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31b4cd7f644cf47227d4dc7c540303d01abcbe06bf62120fb7d03e6b1cd9c6b4" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.031342 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.131718 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql"] Jan 23 18:46:21 crc kubenswrapper[4760]: E0123 18:46:21.132177 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836b1ef8-b075-4321-9f13-18120bc8d010" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132200 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="836b1ef8-b075-4321-9f13-18120bc8d010" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:21 crc kubenswrapper[4760]: E0123 18:46:21.132221 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="registry-server" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132228 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="registry-server" Jan 23 18:46:21 crc kubenswrapper[4760]: E0123 
18:46:21.132239 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="extract-utilities" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132248 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="extract-utilities" Jan 23 18:46:21 crc kubenswrapper[4760]: E0123 18:46:21.132275 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="extract-content" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132281 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="extract-content" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132506 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="836b1ef8-b075-4321-9f13-18120bc8d010" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.132540 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf459bc-e4bc-4731-8d96-2c77eb96c6b6" containerName="registry-server" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.133331 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.136214 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.136482 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.138129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.138306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.138504 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.138718 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.159506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql"] Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.245790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.245854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.246075 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.246198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.246252 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sngqb\" (UniqueName: \"kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.246333 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sngqb\" (UniqueName: \"kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.347996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.352504 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.353335 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.353805 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc 
kubenswrapper[4760]: I0123 18:46:21.353800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.357925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.368091 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sngqb\" (UniqueName: \"kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-md5ql\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:21 crc kubenswrapper[4760]: I0123 18:46:21.452250 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:46:22 crc kubenswrapper[4760]: I0123 18:46:22.027745 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql"] Jan 23 18:46:22 crc kubenswrapper[4760]: I0123 18:46:22.047211 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" event={"ID":"538bf016-5ed3-44cd-bcf0-f59c56e01048","Type":"ContainerStarted","Data":"e3bb3c6d0aa53ac6a4af4cbd45a846a3fbe397ec91f3b360f58c8da7fc2f9a78"} Jan 23 18:46:23 crc kubenswrapper[4760]: I0123 18:46:23.057345 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" event={"ID":"538bf016-5ed3-44cd-bcf0-f59c56e01048","Type":"ContainerStarted","Data":"ed88b57a7e276e3e6697bd2d41079f6270d2e205aef45e9584ee02654c58d429"} Jan 23 18:47:16 crc kubenswrapper[4760]: I0123 18:47:16.075743 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:47:16 crc kubenswrapper[4760]: I0123 18:47:16.076874 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:47:46 crc kubenswrapper[4760]: I0123 18:47:46.075602 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:47:46 crc kubenswrapper[4760]: I0123 18:47:46.076187 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:48:16 crc kubenswrapper[4760]: I0123 18:48:16.075170 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:48:16 crc kubenswrapper[4760]: I0123 18:48:16.075825 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:48:16 crc kubenswrapper[4760]: I0123 18:48:16.075876 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:48:16 crc kubenswrapper[4760]: I0123 18:48:16.076638 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:48:16 crc kubenswrapper[4760]: I0123 18:48:16.076700 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0" gracePeriod=600 Jan 23 18:48:17 crc kubenswrapper[4760]: I0123 18:48:17.070510 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0" exitCode=0 Jan 23 18:48:17 crc kubenswrapper[4760]: I0123 18:48:17.071095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0"} Jan 23 18:48:17 crc kubenswrapper[4760]: I0123 18:48:17.071150 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af"} Jan 23 18:48:17 crc kubenswrapper[4760]: I0123 18:48:17.071172 4760 scope.go:117] "RemoveContainer" containerID="0dea34b98bea93b86a5ca7be02bde77574869adb25ee88b66765924db9889a53" Jan 23 18:48:17 crc kubenswrapper[4760]: I0123 18:48:17.098084 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" podStartSLOduration=115.5087245 podStartE2EDuration="1m56.098062353s" podCreationTimestamp="2026-01-23 18:46:21 +0000 UTC" firstStartedPulling="2026-01-23 18:46:22.040976452 +0000 UTC m=+2725.043434385" lastFinishedPulling="2026-01-23 18:46:22.630314295 +0000 UTC m=+2725.632772238" observedRunningTime="2026-01-23 18:46:23.088453223 +0000 UTC m=+2726.090911166" watchObservedRunningTime="2026-01-23 18:48:17.098062353 +0000 UTC 
m=+2840.100520286" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.047268 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.049718 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.063284 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.165737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85bq\" (UniqueName: \"kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.166142 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.166250 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.267705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.267783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.267918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85bq\" (UniqueName: \"kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.268391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.268446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.291582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85bq\" (UniqueName: 
\"kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq\") pod \"certified-operators-jxlp6\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.381117 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:13 crc kubenswrapper[4760]: I0123 18:50:13.925662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:14 crc kubenswrapper[4760]: I0123 18:50:14.030232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerStarted","Data":"aa20cedd87f7f9014fd4f4a95cddb5aa6f4f4c4482e726877298d5050952c24f"} Jan 23 18:50:15 crc kubenswrapper[4760]: I0123 18:50:15.045771 4760 generic.go:334] "Generic (PLEG): container finished" podID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerID="69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f" exitCode=0 Jan 23 18:50:15 crc kubenswrapper[4760]: I0123 18:50:15.046087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerDied","Data":"69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f"} Jan 23 18:50:15 crc kubenswrapper[4760]: I0123 18:50:15.049304 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:50:16 crc kubenswrapper[4760]: I0123 18:50:16.075817 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 18:50:16 crc kubenswrapper[4760]: I0123 18:50:16.076135 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:50:18 crc kubenswrapper[4760]: I0123 18:50:18.368378 4760 generic.go:334] "Generic (PLEG): container finished" podID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerID="5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69" exitCode=0 Jan 23 18:50:18 crc kubenswrapper[4760]: I0123 18:50:18.368535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerDied","Data":"5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69"} Jan 23 18:50:19 crc kubenswrapper[4760]: I0123 18:50:19.380941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerStarted","Data":"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043"} Jan 23 18:50:19 crc kubenswrapper[4760]: I0123 18:50:19.406686 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jxlp6" podStartSLOduration=2.546328106 podStartE2EDuration="6.406662467s" podCreationTimestamp="2026-01-23 18:50:13 +0000 UTC" firstStartedPulling="2026-01-23 18:50:15.048759151 +0000 UTC m=+2958.051217104" lastFinishedPulling="2026-01-23 18:50:18.909093532 +0000 UTC m=+2961.911551465" observedRunningTime="2026-01-23 18:50:19.400150297 +0000 UTC m=+2962.402608250" watchObservedRunningTime="2026-01-23 18:50:19.406662467 +0000 UTC m=+2962.409120400" Jan 23 18:50:23 crc kubenswrapper[4760]: I0123 
18:50:23.381612 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:23 crc kubenswrapper[4760]: I0123 18:50:23.381959 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:23 crc kubenswrapper[4760]: I0123 18:50:23.424795 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.437949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.488961 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.491646 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jxlp6" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="registry-server" containerID="cri-o://6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043" gracePeriod=2 Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.926694 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.981477 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content\") pod \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.981948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities\") pod \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.982034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85bq\" (UniqueName: \"kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq\") pod \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\" (UID: \"65511cbd-4ce0-4d8d-993e-71a1a355ee17\") " Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.984740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities" (OuterVolumeSpecName: "utilities") pod "65511cbd-4ce0-4d8d-993e-71a1a355ee17" (UID: "65511cbd-4ce0-4d8d-993e-71a1a355ee17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:50:33 crc kubenswrapper[4760]: I0123 18:50:33.989644 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq" (OuterVolumeSpecName: "kube-api-access-t85bq") pod "65511cbd-4ce0-4d8d-993e-71a1a355ee17" (UID: "65511cbd-4ce0-4d8d-993e-71a1a355ee17"). InnerVolumeSpecName "kube-api-access-t85bq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.045754 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65511cbd-4ce0-4d8d-993e-71a1a355ee17" (UID: "65511cbd-4ce0-4d8d-993e-71a1a355ee17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.085193 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.085244 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85bq\" (UniqueName: \"kubernetes.io/projected/65511cbd-4ce0-4d8d-993e-71a1a355ee17-kube-api-access-t85bq\") on node \"crc\" DevicePath \"\"" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.085258 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65511cbd-4ce0-4d8d-993e-71a1a355ee17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.501643 4760 generic.go:334] "Generic (PLEG): container finished" podID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerID="6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043" exitCode=0 Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.501694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerDied","Data":"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043"} Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.501714 4760 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxlp6" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.501732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxlp6" event={"ID":"65511cbd-4ce0-4d8d-993e-71a1a355ee17","Type":"ContainerDied","Data":"aa20cedd87f7f9014fd4f4a95cddb5aa6f4f4c4482e726877298d5050952c24f"} Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.501751 4760 scope.go:117] "RemoveContainer" containerID="6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.519147 4760 scope.go:117] "RemoveContainer" containerID="5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.542921 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.550572 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jxlp6"] Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.557139 4760 scope.go:117] "RemoveContainer" containerID="69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.578287 4760 scope.go:117] "RemoveContainer" containerID="6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043" Jan 23 18:50:34 crc kubenswrapper[4760]: E0123 18:50:34.578783 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043\": container with ID starting with 6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043 not found: ID does not exist" containerID="6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.578835 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043"} err="failed to get container status \"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043\": rpc error: code = NotFound desc = could not find container \"6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043\": container with ID starting with 6229721f665061d2235ff6e9596d726a31418dfa7f80e7cb1ee9869bb06b1043 not found: ID does not exist" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.578865 4760 scope.go:117] "RemoveContainer" containerID="5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69" Jan 23 18:50:34 crc kubenswrapper[4760]: E0123 18:50:34.579275 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69\": container with ID starting with 5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69 not found: ID does not exist" containerID="5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.579310 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69"} err="failed to get container status \"5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69\": rpc error: code = NotFound desc = could not find container \"5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69\": container with ID starting with 5564f94111bb5d4837025b3ff786149ab735b0392edcd1b12928d8124b3ecc69 not found: ID does not exist" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.579334 4760 scope.go:117] "RemoveContainer" containerID="69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f" Jan 23 18:50:34 crc kubenswrapper[4760]: E0123 
18:50:34.579575 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f\": container with ID starting with 69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f not found: ID does not exist" containerID="69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f" Jan 23 18:50:34 crc kubenswrapper[4760]: I0123 18:50:34.579598 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f"} err="failed to get container status \"69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f\": rpc error: code = NotFound desc = could not find container \"69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f\": container with ID starting with 69516f5f36cfd2672cc47645f5da6cb814e97d8e86e926d41baaef4b61a4be6f not found: ID does not exist" Jan 23 18:50:35 crc kubenswrapper[4760]: I0123 18:50:35.607030 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" path="/var/lib/kubelet/pods/65511cbd-4ce0-4d8d-993e-71a1a355ee17/volumes" Jan 23 18:50:46 crc kubenswrapper[4760]: I0123 18:50:46.075203 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:50:46 crc kubenswrapper[4760]: I0123 18:50:46.076821 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.075366 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.076048 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.076102 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.076920 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.076985 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" gracePeriod=600 Jan 23 18:51:16 crc kubenswrapper[4760]: E0123 18:51:16.214464 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.880848 4760 generic.go:334] "Generic (PLEG): container finished" podID="538bf016-5ed3-44cd-bcf0-f59c56e01048" containerID="ed88b57a7e276e3e6697bd2d41079f6270d2e205aef45e9584ee02654c58d429" exitCode=0 Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.881627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" event={"ID":"538bf016-5ed3-44cd-bcf0-f59c56e01048","Type":"ContainerDied","Data":"ed88b57a7e276e3e6697bd2d41079f6270d2e205aef45e9584ee02654c58d429"} Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.886087 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" exitCode=0 Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.886185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af"} Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.886244 4760 scope.go:117] "RemoveContainer" containerID="ab4b062cff7a4b36755049ea2e39f10cfe7fcda0eec6049c0cc5195c662200f0" Jan 23 18:51:16 crc kubenswrapper[4760]: I0123 18:51:16.887345 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:51:16 crc kubenswrapper[4760]: E0123 18:51:16.887962 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.296202 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.400723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sngqb\" (UniqueName: \"kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.400813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.400905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.400949 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc 
kubenswrapper[4760]: I0123 18:51:18.400980 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.401152 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle\") pod \"538bf016-5ed3-44cd-bcf0-f59c56e01048\" (UID: \"538bf016-5ed3-44cd-bcf0-f59c56e01048\") " Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.407151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph" (OuterVolumeSpecName: "ceph") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.408676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb" (OuterVolumeSpecName: "kube-api-access-sngqb") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "kube-api-access-sngqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.412564 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.427444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.428033 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.438115 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory" (OuterVolumeSpecName: "inventory") pod "538bf016-5ed3-44cd-bcf0-f59c56e01048" (UID: "538bf016-5ed3-44cd-bcf0-f59c56e01048"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503147 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503188 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503198 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503207 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503216 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sngqb\" (UniqueName: \"kubernetes.io/projected/538bf016-5ed3-44cd-bcf0-f59c56e01048-kube-api-access-sngqb\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.503225 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/538bf016-5ed3-44cd-bcf0-f59c56e01048-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.910889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" 
event={"ID":"538bf016-5ed3-44cd-bcf0-f59c56e01048","Type":"ContainerDied","Data":"e3bb3c6d0aa53ac6a4af4cbd45a846a3fbe397ec91f3b360f58c8da7fc2f9a78"} Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.910936 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3bb3c6d0aa53ac6a4af4cbd45a846a3fbe397ec91f3b360f58c8da7fc2f9a78" Jan 23 18:51:18 crc kubenswrapper[4760]: I0123 18:51:18.911014 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-md5ql" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.018466 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44"] Jan 23 18:51:19 crc kubenswrapper[4760]: E0123 18:51:19.018841 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538bf016-5ed3-44cd-bcf0-f59c56e01048" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.018858 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="538bf016-5ed3-44cd-bcf0-f59c56e01048" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:19 crc kubenswrapper[4760]: E0123 18:51:19.018884 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="extract-content" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.018891 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="extract-content" Jan 23 18:51:19 crc kubenswrapper[4760]: E0123 18:51:19.018906 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="extract-utilities" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.018912 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" 
containerName="extract-utilities" Jan 23 18:51:19 crc kubenswrapper[4760]: E0123 18:51:19.018929 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="registry-server" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.018935 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="registry-server" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.019085 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="538bf016-5ed3-44cd-bcf0-f59c56e01048" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.019111 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="65511cbd-4ce0-4d8d-993e-71a1a355ee17" containerName="registry-server" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.019833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.023171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.023180 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.023857 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.023875 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.023920 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpn6j" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 
18:51:19.024119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.024470 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44"] Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.027455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.027705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.028450 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.112820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.112870 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.112900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgl2\" (UniqueName: \"kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.113381 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214333 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214401 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214480 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214512 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.214748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.215119 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.215151 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.215188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.215301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgl2\" (UniqueName: \"kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.216027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.216283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.218402 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.218621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.218750 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.219164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 
18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.219815 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.220215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.220566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.220950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.234509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgl2\" (UniqueName: 
\"kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.340906 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:51:19 crc kubenswrapper[4760]: I0123 18:51:19.927567 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44"] Jan 23 18:51:20 crc kubenswrapper[4760]: I0123 18:51:20.930261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" event={"ID":"a7f5467d-783f-4be7-a149-ea8b97bcf468","Type":"ContainerStarted","Data":"80cbe6d31c973a04c917afdd2de66c2a1a2555a5450d43dbad36cfb429de4990"} Jan 23 18:51:20 crc kubenswrapper[4760]: I0123 18:51:20.930829 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" event={"ID":"a7f5467d-783f-4be7-a149-ea8b97bcf468","Type":"ContainerStarted","Data":"f448d32b050c1c4aa1910081a89106c0c9987b6fd532d59485c37878c85fae36"} Jan 23 18:51:20 crc kubenswrapper[4760]: I0123 18:51:20.947881 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" podStartSLOduration=2.499985319 podStartE2EDuration="2.947857874s" podCreationTimestamp="2026-01-23 18:51:18 +0000 UTC" firstStartedPulling="2026-01-23 18:51:19.926630537 +0000 UTC m=+3022.929088470" lastFinishedPulling="2026-01-23 18:51:20.374503082 +0000 UTC m=+3023.376961025" observedRunningTime="2026-01-23 18:51:20.94664554 +0000 UTC m=+3023.949103473" watchObservedRunningTime="2026-01-23 
18:51:20.947857874 +0000 UTC m=+3023.950315807" Jan 23 18:51:29 crc kubenswrapper[4760]: I0123 18:51:29.596273 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:51:29 crc kubenswrapper[4760]: E0123 18:51:29.597103 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:51:40 crc kubenswrapper[4760]: I0123 18:51:40.595510 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:51:40 crc kubenswrapper[4760]: E0123 18:51:40.596323 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:51:55 crc kubenswrapper[4760]: I0123 18:51:55.595656 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:51:55 crc kubenswrapper[4760]: E0123 18:51:55.600885 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:52:08 crc kubenswrapper[4760]: I0123 18:52:08.595612 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:52:08 crc kubenswrapper[4760]: E0123 18:52:08.596301 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:52:19 crc kubenswrapper[4760]: I0123 18:52:19.595312 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:52:19 crc kubenswrapper[4760]: E0123 18:52:19.596057 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.446129 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.450272 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.459808 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.503513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjrc\" (UniqueName: \"kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.503754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.503849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.606330 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjrc\" (UniqueName: \"kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.606568 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.606635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.607286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.608834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.635464 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjrc\" (UniqueName: \"kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc\") pod \"community-operators-lvwst\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:24 crc kubenswrapper[4760]: I0123 18:52:24.773571 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:25 crc kubenswrapper[4760]: I0123 18:52:25.318537 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:25 crc kubenswrapper[4760]: I0123 18:52:25.474203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerStarted","Data":"e8c84d1af45bd72caa1cef41a411260b4e064b1fecaac1d004d68bda6e1e91cc"} Jan 23 18:52:25 crc kubenswrapper[4760]: E0123 18:52:25.920139 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb850031f_55b5_455d_8136_9d519a475db0.slice/crio-conmon-8a32492a53b07da4f3f76ab31fc7b90dc00e61662d894ec1a39e2df0374bf3e3.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:52:26 crc kubenswrapper[4760]: I0123 18:52:26.484470 4760 generic.go:334] "Generic (PLEG): container finished" podID="b850031f-55b5-455d-8136-9d519a475db0" containerID="8a32492a53b07da4f3f76ab31fc7b90dc00e61662d894ec1a39e2df0374bf3e3" exitCode=0 Jan 23 18:52:26 crc kubenswrapper[4760]: I0123 18:52:26.484561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerDied","Data":"8a32492a53b07da4f3f76ab31fc7b90dc00e61662d894ec1a39e2df0374bf3e3"} Jan 23 18:52:29 crc kubenswrapper[4760]: I0123 18:52:29.520022 4760 generic.go:334] "Generic (PLEG): container finished" podID="b850031f-55b5-455d-8136-9d519a475db0" containerID="f92fef9d379e7762dbaf6c4af6ed9c49ce235cee58f29b5618b1ad1e2bfe2e1f" exitCode=0 Jan 23 18:52:29 crc kubenswrapper[4760]: I0123 18:52:29.520100 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerDied","Data":"f92fef9d379e7762dbaf6c4af6ed9c49ce235cee58f29b5618b1ad1e2bfe2e1f"} Jan 23 18:52:31 crc kubenswrapper[4760]: I0123 18:52:31.535434 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerStarted","Data":"9a7918ad232539a612f213e637ed3b2a764fa51435bc3b4ef3ae1588f5e978c3"} Jan 23 18:52:31 crc kubenswrapper[4760]: I0123 18:52:31.553655 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvwst" podStartSLOduration=3.788903759 podStartE2EDuration="7.553635937s" podCreationTimestamp="2026-01-23 18:52:24 +0000 UTC" firstStartedPulling="2026-01-23 18:52:26.48654895 +0000 UTC m=+3089.489006893" lastFinishedPulling="2026-01-23 18:52:30.251281138 +0000 UTC m=+3093.253739071" observedRunningTime="2026-01-23 18:52:31.552583358 +0000 UTC m=+3094.555041311" watchObservedRunningTime="2026-01-23 18:52:31.553635937 +0000 UTC m=+3094.556093870" Jan 23 18:52:31 crc kubenswrapper[4760]: I0123 18:52:31.595139 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:52:31 crc kubenswrapper[4760]: E0123 18:52:31.595381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:52:34 crc kubenswrapper[4760]: I0123 18:52:34.775038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:34 crc kubenswrapper[4760]: I0123 18:52:34.775743 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:34 crc kubenswrapper[4760]: I0123 18:52:34.832176 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:35 crc kubenswrapper[4760]: I0123 18:52:35.620274 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:35 crc kubenswrapper[4760]: I0123 18:52:35.671916 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:37 crc kubenswrapper[4760]: I0123 18:52:37.587789 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvwst" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="registry-server" containerID="cri-o://9a7918ad232539a612f213e637ed3b2a764fa51435bc3b4ef3ae1588f5e978c3" gracePeriod=2 Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.658097 4760 generic.go:334] "Generic (PLEG): container finished" podID="b850031f-55b5-455d-8136-9d519a475db0" containerID="9a7918ad232539a612f213e637ed3b2a764fa51435bc3b4ef3ae1588f5e978c3" exitCode=0 Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.658174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerDied","Data":"9a7918ad232539a612f213e637ed3b2a764fa51435bc3b4ef3ae1588f5e978c3"} Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.776499 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.928256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsjrc\" (UniqueName: \"kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc\") pod \"b850031f-55b5-455d-8136-9d519a475db0\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.928508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities\") pod \"b850031f-55b5-455d-8136-9d519a475db0\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.928710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content\") pod \"b850031f-55b5-455d-8136-9d519a475db0\" (UID: \"b850031f-55b5-455d-8136-9d519a475db0\") " Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.929382 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities" (OuterVolumeSpecName: "utilities") pod "b850031f-55b5-455d-8136-9d519a475db0" (UID: "b850031f-55b5-455d-8136-9d519a475db0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.929744 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.935710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc" (OuterVolumeSpecName: "kube-api-access-wsjrc") pod "b850031f-55b5-455d-8136-9d519a475db0" (UID: "b850031f-55b5-455d-8136-9d519a475db0"). InnerVolumeSpecName "kube-api-access-wsjrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:52:41 crc kubenswrapper[4760]: I0123 18:52:41.982728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b850031f-55b5-455d-8136-9d519a475db0" (UID: "b850031f-55b5-455d-8136-9d519a475db0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.032150 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850031f-55b5-455d-8136-9d519a475db0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.032435 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsjrc\" (UniqueName: \"kubernetes.io/projected/b850031f-55b5-455d-8136-9d519a475db0-kube-api-access-wsjrc\") on node \"crc\" DevicePath \"\"" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.670290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvwst" event={"ID":"b850031f-55b5-455d-8136-9d519a475db0","Type":"ContainerDied","Data":"e8c84d1af45bd72caa1cef41a411260b4e064b1fecaac1d004d68bda6e1e91cc"} Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.670392 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvwst" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.670601 4760 scope.go:117] "RemoveContainer" containerID="9a7918ad232539a612f213e637ed3b2a764fa51435bc3b4ef3ae1588f5e978c3" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.693989 4760 scope.go:117] "RemoveContainer" containerID="f92fef9d379e7762dbaf6c4af6ed9c49ce235cee58f29b5618b1ad1e2bfe2e1f" Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.727566 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.736691 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvwst"] Jan 23 18:52:42 crc kubenswrapper[4760]: I0123 18:52:42.742829 4760 scope.go:117] "RemoveContainer" containerID="8a32492a53b07da4f3f76ab31fc7b90dc00e61662d894ec1a39e2df0374bf3e3" Jan 23 18:52:43 crc kubenswrapper[4760]: I0123 18:52:43.597599 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:52:43 crc kubenswrapper[4760]: E0123 18:52:43.598552 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:52:43 crc kubenswrapper[4760]: I0123 18:52:43.609949 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b850031f-55b5-455d-8136-9d519a475db0" path="/var/lib/kubelet/pods/b850031f-55b5-455d-8136-9d519a475db0/volumes" Jan 23 18:52:56 crc kubenswrapper[4760]: I0123 18:52:56.597129 4760 scope.go:117] "RemoveContainer" 
containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:52:56 crc kubenswrapper[4760]: E0123 18:52:56.601125 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:53:07 crc kubenswrapper[4760]: I0123 18:53:07.600299 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:53:07 crc kubenswrapper[4760]: E0123 18:53:07.601064 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:53:21 crc kubenswrapper[4760]: I0123 18:53:21.597015 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:53:21 crc kubenswrapper[4760]: E0123 18:53:21.597828 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:53:35 crc kubenswrapper[4760]: I0123 18:53:35.595847 4760 scope.go:117] 
"RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:53:35 crc kubenswrapper[4760]: E0123 18:53:35.596552 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:53:49 crc kubenswrapper[4760]: I0123 18:53:49.595887 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:53:49 crc kubenswrapper[4760]: E0123 18:53:49.596596 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:53:51 crc kubenswrapper[4760]: I0123 18:53:51.244817 4760 generic.go:334] "Generic (PLEG): container finished" podID="a7f5467d-783f-4be7-a149-ea8b97bcf468" containerID="80cbe6d31c973a04c917afdd2de66c2a1a2555a5450d43dbad36cfb429de4990" exitCode=0 Jan 23 18:53:51 crc kubenswrapper[4760]: I0123 18:53:51.244988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" event={"ID":"a7f5467d-783f-4be7-a149-ea8b97bcf468","Type":"ContainerDied","Data":"80cbe6d31c973a04c917afdd2de66c2a1a2555a5450d43dbad36cfb429de4990"} Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.661695 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.726766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.726853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.726906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.726931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgl2\" (UniqueName: \"kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.726955 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727031 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727103 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727170 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727240 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.727263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0\") pod \"a7f5467d-783f-4be7-a149-ea8b97bcf468\" (UID: \"a7f5467d-783f-4be7-a149-ea8b97bcf468\") " Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.743544 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2" (OuterVolumeSpecName: "kube-api-access-6cgl2") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "kube-api-access-6cgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.743892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph" (OuterVolumeSpecName: "ceph") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.746613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.764272 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.788812 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory" (OuterVolumeSpecName: "inventory") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.793701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.800718 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.811441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.815116 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.815646 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.821036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "a7f5467d-783f-4be7-a149-ea8b97bcf468" (UID: "a7f5467d-783f-4be7-a149-ea8b97bcf468"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830134 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830183 4760 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830196 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830208 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgl2\" (UniqueName: \"kubernetes.io/projected/a7f5467d-783f-4be7-a149-ea8b97bcf468-kube-api-access-6cgl2\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830221 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830236 4760 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830250 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-ssh-key-openstack-edpm-ipam\") on 
node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830262 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830273 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830286 4760 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/a7f5467d-783f-4be7-a149-ea8b97bcf468-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4760]: I0123 18:53:52.830297 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a7f5467d-783f-4be7-a149-ea8b97bcf468-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:53 crc kubenswrapper[4760]: I0123 18:53:53.263116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" event={"ID":"a7f5467d-783f-4be7-a149-ea8b97bcf468","Type":"ContainerDied","Data":"f448d32b050c1c4aa1910081a89106c0c9987b6fd532d59485c37878c85fae36"} Jan 23 18:53:53 crc kubenswrapper[4760]: I0123 18:53:53.263157 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44" Jan 23 18:53:53 crc kubenswrapper[4760]: I0123 18:53:53.263178 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f448d32b050c1c4aa1910081a89106c0c9987b6fd532d59485c37878c85fae36" Jan 23 18:54:02 crc kubenswrapper[4760]: I0123 18:54:02.597025 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:54:02 crc kubenswrapper[4760]: E0123 18:54:02.598832 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.937398 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 23 18:54:08 crc kubenswrapper[4760]: E0123 18:54:08.939809 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="registry-server" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.939830 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="registry-server" Jan 23 18:54:08 crc kubenswrapper[4760]: E0123 18:54:08.939863 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="extract-utilities" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.939873 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="extract-utilities" Jan 23 18:54:08 crc kubenswrapper[4760]: E0123 
18:54:08.939887 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f5467d-783f-4be7-a149-ea8b97bcf468" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.939895 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f5467d-783f-4be7-a149-ea8b97bcf468" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:08 crc kubenswrapper[4760]: E0123 18:54:08.939916 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="extract-content" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.939922 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="extract-content" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.940263 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b850031f-55b5-455d-8136-9d519a475db0" containerName="registry-server" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.940288 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f5467d-783f-4be7-a149-ea8b97bcf468" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.941855 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.949321 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.949350 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 23 18:54:08 crc kubenswrapper[4760]: I0123 18:54:08.971823 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.037487 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.039357 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.041738 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.043566 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061132 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 
18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061183 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhb5v\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-kube-api-access-rhb5v\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 
crc kubenswrapper[4760]: I0123 18:54:09.061347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-sys\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061549 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061684 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.061847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-run\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163666 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-nvme\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163756 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163773 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhb5v\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-kube-api-access-rhb5v\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-combined-ca-bundle\") pod 
\"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163830 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163856 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data-custom\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-sys\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163918 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " 
pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-sys\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.163980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-dev\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164024 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164069 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-run\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164118 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-lib-modules\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164133 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-ceph\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " 
pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164238 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-run\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-sys\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164434 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164252 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-run\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-scripts\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw8t\" (UniqueName: \"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-kube-api-access-cxw8t\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data-custom\") pod 
\"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.164828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.165062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.165132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd897463-2b70-4dcd-9a51-442771b77ff4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.170677 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.171088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 
18:54:09.171253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.171688 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.172026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd897463-2b70-4dcd-9a51-442771b77ff4-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.180843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhb5v\" (UniqueName: \"kubernetes.io/projected/dd897463-2b70-4dcd-9a51-442771b77ff4-kube-api-access-rhb5v\") pod \"cinder-volume-volume1-0\" (UID: \"dd897463-2b70-4dcd-9a51-442771b77ff4\") " pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.266852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-scripts\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.267489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw8t\" (UniqueName: 
\"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-kube-api-access-cxw8t\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-nvme\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268915 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269028 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data-custom\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" 
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269118 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-sys\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269354 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-dev\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-run\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270265 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-nvme\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.268220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269526 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-sys\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270343 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-lib-modules\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269812 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-run\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.269810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-dev\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270669 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-lib-modules\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " 
pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-ceph\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.270524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34fe3286-04be-40d9-a398-86c54b9025f1-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.272242 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-scripts\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.274366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.275601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data-custom\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.280016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-ceph\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.282005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.283474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fe3286-04be-40d9-a398-86c54b9025f1-config-data\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.289271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw8t\" (UniqueName: \"kubernetes.io/projected/34fe3286-04be-40d9-a398-86c54b9025f1-kube-api-access-cxw8t\") pod \"cinder-backup-0\" (UID: \"34fe3286-04be-40d9-a398-86c54b9025f1\") " pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.352801 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.623734 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-thpxd"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.625594 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-thpxd" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.635280 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-thpxd"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.730023 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-aa79-account-create-update-lsdwl"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.731338 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-aa79-account-create-update-lsdwl" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.740818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.748749 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-aa79-account-create-update-lsdwl"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.781765 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trnpx\" (UniqueName: \"kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.781855 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd" Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.799361 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.809279 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.812778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.814041 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.814670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.814840 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttxth"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.824850 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"]
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.827220 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.837283 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.837571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cstpm"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.837633 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.837710 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.847023 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.871601 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"]
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.882957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trnpx\" (UniqueName: \"kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883343 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8r6\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.883996 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.884098 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ppvm\" (UniqueName: \"kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.884175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.884240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.885627 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.917309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trnpx\" (UniqueName: \"kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx\") pod \"manila-db-create-thpxd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " pod="openstack/manila-db-create-thpxd"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.928397 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.930519 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.935392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.935786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.953037 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 18:54:09 crc kubenswrapper[4760]: E0123 18:54:09.954277 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-7g8r6 logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="20c1b326-b143-48a0-a480-e794dc14e530"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.965179 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-thpxd"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.969298 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ppvm\" (UniqueName: \"kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986245 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986345 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9mq\" (UniqueName: \"kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986475 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g8r6\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986592 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986607 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986623 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986713 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.986729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnr8\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.989533 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.989758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:09 crc kubenswrapper[4760]: I0123 18:54:09.996745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:09.998915 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"]
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.000859 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.001907 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.002921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.003742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.004564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.019509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.020206 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.034000 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"]
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.052735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g8r6\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.087130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ppvm\" (UniqueName: \"kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm\") pod \"manila-aa79-account-create-update-lsdwl\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " pod="openstack/manila-aa79-account-create-update-lsdwl"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.105681 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.109945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110002 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvff\" (UniqueName: \"kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnr8\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110519 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110638 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9mq\" (UniqueName: \"kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.110825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.111101 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.112552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.116098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.127682 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.128701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.151954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.152119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.154080 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.157064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.159228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.190230 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlnr8\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.195673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.213263 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.213362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rvff\" (UniqueName: \"kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.215183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.226673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9mq\" (UniqueName: \"kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq\") pod \"horizon-f66bb87ff-nxjgw\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " pod="openstack/horizon-f66bb87ff-nxjgw"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.227803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.229840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.229913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.230062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.232091 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.243729 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.253859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.254150 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6"
Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.272794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rvff\" (UniqueName:
\"kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff\") pod \"horizon-5bcbfd84fc-95mr6\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " pod="openstack/horizon-5bcbfd84fc-95mr6" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.286470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.317154 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.360549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-aa79-account-create-update-lsdwl" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.370482 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bcbfd84fc-95mr6" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.467087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"34fe3286-04be-40d9-a398-86c54b9025f1","Type":"ContainerStarted","Data":"99921d76318945fe7edec2aca207d596e0e2f77145026da000a5081244194c1a"} Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.467555 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f66bb87ff-nxjgw" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.469098 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.469603 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"dd897463-2b70-4dcd-9a51-442771b77ff4","Type":"ContainerStarted","Data":"77812dc5bdbaf19c7c6896c5ec83c3ecd5e941a8de28226a55faada714df6d25"} Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.583628 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.584116 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.681473 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-thpxd"] Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.686030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.686132 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.686165 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 
18:54:10.686199 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.686263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.687512 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.693530 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts" (OuterVolumeSpecName: "scripts") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.699193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data" (OuterVolumeSpecName: "config-data") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.699230 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.701192 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph" (OuterVolumeSpecName: "ceph") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.791399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.791484 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.791532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.791572 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g8r6\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6\") pod \"20c1b326-b143-48a0-a480-e794dc14e530\" (UID: \"20c1b326-b143-48a0-a480-e794dc14e530\") " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.791949 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs" (OuterVolumeSpecName: "logs") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792563 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792584 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792597 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792608 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792620 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.792633 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20c1b326-b143-48a0-a480-e794dc14e530-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.796019 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.803128 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6" (OuterVolumeSpecName: "kube-api-access-7g8r6") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "kube-api-access-7g8r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.804748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "20c1b326-b143-48a0-a480-e794dc14e530" (UID: "20c1b326-b143-48a0-a480-e794dc14e530"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.893498 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c1b326-b143-48a0-a480-e794dc14e530-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.893908 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.893925 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g8r6\" (UniqueName: \"kubernetes.io/projected/20c1b326-b143-48a0-a480-e794dc14e530-kube-api-access-7g8r6\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.924650 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.962278 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-aa79-account-create-update-lsdwl"] Jan 23 18:54:10 crc kubenswrapper[4760]: I0123 18:54:10.996008 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.081367 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.204671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.237773 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 
18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.490915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerStarted","Data":"3e90a66fe948d9e1b4901a3e3ec5d2aeee6eb5c367354708f29de44897573b17"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.494750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerStarted","Data":"eccafa1d00a5bdf81e897ccd92eebb791ddb289274f03e5cf52b9841a9ed73b6"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.497057 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerStarted","Data":"7f10302888273ffbfbf5a38f05f1d43da319748dbc452aad42ba2cd45982a308"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.500272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-thpxd" event={"ID":"54f83e20-da38-4866-860b-a54d1f424fbd","Type":"ContainerStarted","Data":"29d2cfed54b85d4f8e966c27e5b29e329b9fc2aa69a8b8110c8c87b8e30dd665"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.500334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-thpxd" event={"ID":"54f83e20-da38-4866-860b-a54d1f424fbd","Type":"ContainerStarted","Data":"1b293a8f9231bc768e58c1f618dad447eee3b828a78ceae9907b267db9799ddc"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.506689 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.511883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-aa79-account-create-update-lsdwl" event={"ID":"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14","Type":"ContainerStarted","Data":"81cc9abdcb4037186395d162fffe909169ccb3a96d43d46483455e6eed074c76"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.512006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-aa79-account-create-update-lsdwl" event={"ID":"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14","Type":"ContainerStarted","Data":"c3798fd751f5ccd544fbd5c73beabca8d7d4c652574c88c690b402c563812624"} Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.554756 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-aa79-account-create-update-lsdwl" podStartSLOduration=2.554737519 podStartE2EDuration="2.554737519s" podCreationTimestamp="2026-01-23 18:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:54:11.54931844 +0000 UTC m=+3194.551776373" watchObservedRunningTime="2026-01-23 18:54:11.554737519 +0000 UTC m=+3194.557195452" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.642824 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.642892 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.725587 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.727382 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.732814 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.733076 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.738365 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.911850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.911966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912014 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24vc\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912194 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912312 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:11 crc kubenswrapper[4760]: I0123 18:54:11.912332 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.013861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.013924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.013943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24vc\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.013963 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014098 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014158 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.014753 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.015266 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.020134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.020817 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.021262 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " 
pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.024160 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.024306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.035632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24vc\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.040440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.055854 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.412227 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.453257 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.457753 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.459506 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.509954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.538553 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.550927 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8rphc\" (UniqueName: \"kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551131 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551228 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551250 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.551269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.556536 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"] Jan 23 
18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.558259 4760 generic.go:334] "Generic (PLEG): container finished" podID="54f83e20-da38-4866-860b-a54d1f424fbd" containerID="29d2cfed54b85d4f8e966c27e5b29e329b9fc2aa69a8b8110c8c87b8e30dd665" exitCode=0 Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.558874 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-thpxd" event={"ID":"54f83e20-da38-4866-860b-a54d1f424fbd","Type":"ContainerDied","Data":"29d2cfed54b85d4f8e966c27e5b29e329b9fc2aa69a8b8110c8c87b8e30dd665"} Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.595465 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-559467fcc6-pxz2z"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.597336 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.611142 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.635223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559467fcc6-pxz2z"] Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.653143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.653476 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 
18:54:12.653743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.653811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.653887 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.653931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.654049 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rphc\" (UniqueName: \"kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.654111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.655123 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.655828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.659263 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.659584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.674572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle\") pod \"horizon-cd9b568d4-rhb64\" (UID: 
\"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.677006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rphc\" (UniqueName: \"kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc\") pod \"horizon-cd9b568d4-rhb64\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-logs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-combined-ca-bundle\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755822 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-secret-key\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-config-data\") pod \"horizon-559467fcc6-pxz2z\" (UID: 
\"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7zh\" (UniqueName: \"kubernetes.io/projected/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-kube-api-access-6d7zh\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.755972 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-scripts\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.756004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-tls-certs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.821139 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-combined-ca-bundle\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-secret-key\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858655 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-config-data\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7zh\" (UniqueName: \"kubernetes.io/projected/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-kube-api-access-6d7zh\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-scripts\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc 
kubenswrapper[4760]: I0123 18:54:12.858726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-tls-certs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.858743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-logs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.859283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-logs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.859857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-scripts\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.862810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-secret-key\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.864167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-config-data\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.866074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-horizon-tls-certs\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.866317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-combined-ca-bundle\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.884654 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7zh\" (UniqueName: \"kubernetes.io/projected/fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6-kube-api-access-6d7zh\") pod \"horizon-559467fcc6-pxz2z\" (UID: \"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6\") " pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.927903 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:12 crc kubenswrapper[4760]: I0123 18:54:12.992795 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-thpxd" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.164351 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trnpx\" (UniqueName: \"kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx\") pod \"54f83e20-da38-4866-860b-a54d1f424fbd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.166740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts\") pod \"54f83e20-da38-4866-860b-a54d1f424fbd\" (UID: \"54f83e20-da38-4866-860b-a54d1f424fbd\") " Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.167550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54f83e20-da38-4866-860b-a54d1f424fbd" (UID: "54f83e20-da38-4866-860b-a54d1f424fbd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.167949 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54f83e20-da38-4866-860b-a54d1f424fbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.173562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx" (OuterVolumeSpecName: "kube-api-access-trnpx") pod "54f83e20-da38-4866-860b-a54d1f424fbd" (UID: "54f83e20-da38-4866-860b-a54d1f424fbd"). InnerVolumeSpecName "kube-api-access-trnpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.269322 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trnpx\" (UniqueName: \"kubernetes.io/projected/54f83e20-da38-4866-860b-a54d1f424fbd-kube-api-access-trnpx\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:13 crc kubenswrapper[4760]: W0123 18:54:13.477151 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb3feba_deb6_4186_8cc2_fcfbdedd6154.slice/crio-687361a56673a33557c7dedc77a626ffe4f4d9b5b300abdc94e9b093ed751e15 WatchSource:0}: Error finding container 687361a56673a33557c7dedc77a626ffe4f4d9b5b300abdc94e9b093ed751e15: Status 404 returned error can't find the container with id 687361a56673a33557c7dedc77a626ffe4f4d9b5b300abdc94e9b093ed751e15 Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.479024 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:54:13 crc kubenswrapper[4760]: W0123 18:54:13.491282 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd7757d4_8787_438c_bda4_eb895e83aae7.slice/crio-de9fa50110565b858bf3f19db79c98cb2ea78a54d8265b348668f88ea6217f5a WatchSource:0}: Error finding container de9fa50110565b858bf3f19db79c98cb2ea78a54d8265b348668f88ea6217f5a: Status 404 returned error can't find the container with id de9fa50110565b858bf3f19db79c98cb2ea78a54d8265b348668f88ea6217f5a Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.496336 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.588183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559467fcc6-pxz2z"] Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.591740 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-db-create-thpxd" event={"ID":"54f83e20-da38-4866-860b-a54d1f424fbd","Type":"ContainerDied","Data":"1b293a8f9231bc768e58c1f618dad447eee3b828a78ceae9907b267db9799ddc"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.591791 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b293a8f9231bc768e58c1f618dad447eee3b828a78ceae9907b267db9799ddc" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.591791 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-thpxd" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.607081 4760 generic.go:334] "Generic (PLEG): container finished" podID="9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" containerID="81cc9abdcb4037186395d162fffe909169ccb3a96d43d46483455e6eed074c76" exitCode=0 Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.632152 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.081840404 podStartE2EDuration="5.632090603s" podCreationTimestamp="2026-01-23 18:54:08 +0000 UTC" firstStartedPulling="2026-01-23 18:54:10.347566041 +0000 UTC m=+3193.350023974" lastFinishedPulling="2026-01-23 18:54:12.89781624 +0000 UTC m=+3195.900274173" observedRunningTime="2026-01-23 18:54:13.627154387 +0000 UTC m=+3196.629612320" watchObservedRunningTime="2026-01-23 18:54:13.632090603 +0000 UTC m=+3196.634548536" Jan 23 18:54:13 crc kubenswrapper[4760]: W0123 18:54:13.646882 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdf60fb7_bbc2_41ed_92ca_f8970e75b4a6.slice/crio-ffda08cbc56e472daa8a65217721ed513152fc56c0d2ba4d3698c90bb748a6f4 WatchSource:0}: Error finding container ffda08cbc56e472daa8a65217721ed513152fc56c0d2ba4d3698c90bb748a6f4: Status 404 returned error can't find the container with id 
ffda08cbc56e472daa8a65217721ed513152fc56c0d2ba4d3698c90bb748a6f4 Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.667943 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.055340294 podStartE2EDuration="5.66792439s" podCreationTimestamp="2026-01-23 18:54:08 +0000 UTC" firstStartedPulling="2026-01-23 18:54:10.165659781 +0000 UTC m=+3193.168117714" lastFinishedPulling="2026-01-23 18:54:12.778243877 +0000 UTC m=+3195.780701810" observedRunningTime="2026-01-23 18:54:13.664084594 +0000 UTC m=+3196.666542527" watchObservedRunningTime="2026-01-23 18:54:13.66792439 +0000 UTC m=+3196.670382323" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.703882 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c1b326-b143-48a0-a480-e794dc14e530" path="/var/lib/kubelet/pods/20c1b326-b143-48a0-a480-e794dc14e530/volumes" Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706093 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"34fe3286-04be-40d9-a398-86c54b9025f1","Type":"ContainerStarted","Data":"266bb868eb4869a3ae7775cc2621bcafc118bc4cc62512832ceb78416d8834a6"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"34fe3286-04be-40d9-a398-86c54b9025f1","Type":"ContainerStarted","Data":"af03e71a920157d1934ca8a95d30dd54085d4b3d3296ca782ec6fad118f16c46"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706313 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-aa79-account-create-update-lsdwl" event={"ID":"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14","Type":"ContainerDied","Data":"81cc9abdcb4037186395d162fffe909169ccb3a96d43d46483455e6eed074c76"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-cd9b568d4-rhb64" event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerStarted","Data":"687361a56673a33557c7dedc77a626ffe4f4d9b5b300abdc94e9b093ed751e15"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706708 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"dd897463-2b70-4dcd-9a51-442771b77ff4","Type":"ContainerStarted","Data":"01e26bf8c0a69a08321127ca5020812ad3ca30d933fb3ceb125514bb536dbbac"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"dd897463-2b70-4dcd-9a51-442771b77ff4","Type":"ContainerStarted","Data":"cdfb01423bcaea35f3ffc4a1fb49f692c0da4155ae8f421d832c13b614194b99"} Jan 23 18:54:13 crc kubenswrapper[4760]: I0123 18:54:13.706898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerStarted","Data":"de9fa50110565b858bf3f19db79c98cb2ea78a54d8265b348668f88ea6217f5a"} Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.282253 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.354090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.678485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559467fcc6-pxz2z" event={"ID":"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6","Type":"ContainerStarted","Data":"ffda08cbc56e472daa8a65217721ed513152fc56c0d2ba4d3698c90bb748a6f4"} Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.681616 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerStarted","Data":"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"} Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.681670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerStarted","Data":"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"} Jan 23 18:54:14 crc kubenswrapper[4760]: I0123 18:54:14.683482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerStarted","Data":"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"} Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.181816 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-aa79-account-create-update-lsdwl" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.326309 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ppvm\" (UniqueName: \"kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm\") pod \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.326846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts\") pod \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\" (UID: \"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14\") " Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.328305 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" (UID: "9a69b0b5-d356-4cf9-85f3-148ce3f7ee14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.359716 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm" (OuterVolumeSpecName: "kube-api-access-4ppvm") pod "9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" (UID: "9a69b0b5-d356-4cf9-85f3-148ce3f7ee14"). InnerVolumeSpecName "kube-api-access-4ppvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.430840 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ppvm\" (UniqueName: \"kubernetes.io/projected/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-kube-api-access-4ppvm\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.430877 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.700547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerStarted","Data":"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"} Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.700806 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-log" containerID="cri-o://6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef" gracePeriod=30 Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.702959 4760 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-external-api-0" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-httpd" containerID="cri-o://00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694" gracePeriod=30 Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.716069 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-log" containerID="cri-o://c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f" gracePeriod=30 Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.716787 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-aa79-account-create-update-lsdwl" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.717705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-aa79-account-create-update-lsdwl" event={"ID":"9a69b0b5-d356-4cf9-85f3-148ce3f7ee14","Type":"ContainerDied","Data":"c3798fd751f5ccd544fbd5c73beabca8d7d4c652574c88c690b402c563812624"} Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.717754 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3798fd751f5ccd544fbd5c73beabca8d7d4c652574c88c690b402c563812624" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.718446 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-httpd" containerID="cri-o://42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef" gracePeriod=30 Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.751038 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.751017392 podStartE2EDuration="4.751017392s" podCreationTimestamp="2026-01-23 18:54:11 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:54:15.729220223 +0000 UTC m=+3198.731678156" watchObservedRunningTime="2026-01-23 18:54:15.751017392 +0000 UTC m=+3198.753475325" Jan 23 18:54:15 crc kubenswrapper[4760]: I0123 18:54:15.773528 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.773489031 podStartE2EDuration="6.773489031s" podCreationTimestamp="2026-01-23 18:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:54:15.75706784 +0000 UTC m=+3198.759525773" watchObservedRunningTime="2026-01-23 18:54:15.773489031 +0000 UTC m=+3198.775946984" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.299155 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.403902 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m24vc\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457257 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457313 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457482 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457549 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.457634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs\") pod \"bd7757d4-8787-438c-bda4-eb895e83aae7\" (UID: \"bd7757d4-8787-438c-bda4-eb895e83aae7\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.460538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs" (OuterVolumeSpecName: "logs") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.460716 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.465352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts" (OuterVolumeSpecName: "scripts") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.486806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.488070 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc" (OuterVolumeSpecName: "kube-api-access-m24vc") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "kube-api-access-m24vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.488610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph" (OuterVolumeSpecName: "ceph") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.517083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.531578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data" (OuterVolumeSpecName: "config-data") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.544790 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd7757d4-8787-438c-bda4-eb895e83aae7" (UID: "bd7757d4-8787-438c-bda4-eb895e83aae7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560646 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle\") pod 
\"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560698 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560749 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.560796 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlnr8\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8\") pod \"0d9361de-8dde-4933-a80e-f8ccae325e7c\" (UID: \"0d9361de-8dde-4933-a80e-f8ccae325e7c\") " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561257 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561280 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561307 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561321 4760 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561335 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561347 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m24vc\" (UniqueName: \"kubernetes.io/projected/bd7757d4-8787-438c-bda4-eb895e83aae7-kube-api-access-m24vc\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561359 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561370 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd7757d4-8787-438c-bda4-eb895e83aae7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561380 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd7757d4-8787-438c-bda4-eb895e83aae7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs" (OuterVolumeSpecName: "logs") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.561758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.565260 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph" (OuterVolumeSpecName: "ceph") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.566617 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.572036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8" (OuterVolumeSpecName: "kube-api-access-hlnr8") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "kube-api-access-hlnr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.579438 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts" (OuterVolumeSpecName: "scripts") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.591650 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.595698 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.596108 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.597603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.624364 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.625035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data" (OuterVolumeSpecName: "config-data") pod "0d9361de-8dde-4933-a80e-f8ccae325e7c" (UID: "0d9361de-8dde-4933-a80e-f8ccae325e7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668602 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668636 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668668 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668678 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 
18:54:16.668689 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d9361de-8dde-4933-a80e-f8ccae325e7c-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668697 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668705 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668714 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668723 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9361de-8dde-4933-a80e-f8ccae325e7c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.668731 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlnr8\" (UniqueName: \"kubernetes.io/projected/0d9361de-8dde-4933-a80e-f8ccae325e7c-kube-api-access-hlnr8\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.691912 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733156 4760 generic.go:334] "Generic (PLEG): container finished" podID="0d9361de-8dde-4933-a80e-f8ccae325e7c" 
containerID="42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef" exitCode=0 Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733194 4760 generic.go:334] "Generic (PLEG): container finished" podID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerID="c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f" exitCode=143 Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733240 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerDied","Data":"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerDied","Data":"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733295 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0d9361de-8dde-4933-a80e-f8ccae325e7c","Type":"ContainerDied","Data":"eccafa1d00a5bdf81e897ccd92eebb791ddb289274f03e5cf52b9841a9ed73b6"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733312 4760 scope.go:117] "RemoveContainer" containerID="42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.733625 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741325 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerID="00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694" exitCode=143 Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741365 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerID="6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef" exitCode=143 Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerDied","Data":"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerDied","Data":"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741515 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd7757d4-8787-438c-bda4-eb895e83aae7","Type":"ContainerDied","Data":"de9fa50110565b858bf3f19db79c98cb2ea78a54d8265b348668f88ea6217f5a"} Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.741588 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.771881 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.831677 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.854132 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.863481 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.874602 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.883475 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.883929 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-log" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.883956 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-log" Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.883981 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-log" Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.883989 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-log" Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.884003 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="54f83e20-da38-4866-860b-a54d1f424fbd" containerName="mariadb-database-create"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884012 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f83e20-da38-4866-860b-a54d1f424fbd" containerName="mariadb-database-create"
Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.884033 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" containerName="mariadb-account-create-update"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884042 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" containerName="mariadb-account-create-update"
Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.884060 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884067 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: E0123 18:54:16.884079 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884087 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884302 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" containerName="mariadb-account-create-update"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884319 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-log"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884343 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884361 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f83e20-da38-4866-860b-a54d1f424fbd" containerName="mariadb-database-create"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884370 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" containerName="glance-httpd"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.884383 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" containerName="glance-log"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.885664 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.893924 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.894255 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.894467 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.894758 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttxth"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.902332 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.915490 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.917467 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.922946 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.923188 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 18:54:16 crc kubenswrapper[4760]: I0123 18:54:16.929555 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099196 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099219 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099254 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-config-data\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099456 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099494 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s62s\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-kube-api-access-8s62s\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5m7\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-kube-api-access-wf5m7\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-logs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-scripts\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099815 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-ceph\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.099902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s62s\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-kube-api-access-8s62s\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5m7\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-kube-api-access-wf5m7\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203742 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203760 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-logs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-scripts\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-ceph\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203848 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203869 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.203977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-config-data\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204019 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204560 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204706 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-logs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.204987 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.205145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.211392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.212017 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/058e59c7-9277-4925-810f-105817254775-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.212172 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-ceph\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.212615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.213138 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.215577 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-scripts\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.222519 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.225501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.225600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.227524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058e59c7-9277-4925-810f-105817254775-config-data\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.228836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.233628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s62s\" (UniqueName: \"kubernetes.io/projected/cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9-kube-api-access-8s62s\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.248598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5m7\" (UniqueName: \"kubernetes.io/projected/058e59c7-9277-4925-810f-105817254775-kube-api-access-wf5m7\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.266337 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.315210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"058e59c7-9277-4925-810f-105817254775\") " pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.514745 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.551268 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.615597 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9361de-8dde-4933-a80e-f8ccae325e7c" path="/var/lib/kubelet/pods/0d9361de-8dde-4933-a80e-f8ccae325e7c/volumes"
Jan 23 18:54:17 crc kubenswrapper[4760]: I0123 18:54:17.616540 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7757d4-8787-438c-bda4-eb895e83aae7" path="/var/lib/kubelet/pods/bd7757d4-8787-438c-bda4-eb895e83aae7/volumes"
Jan 23 18:54:19 crc kubenswrapper[4760]: I0123 18:54:19.783789 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Jan 23 18:54:19 crc kubenswrapper[4760]: I0123 18:54:19.789161 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.333674 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-tzrlf"]
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.335430 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.338115 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.339659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-jfm98"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.344582 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-tzrlf"]
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.475068 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvtp\" (UniqueName: \"kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.475492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.475541 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.475654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.577892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvtp\" (UniqueName: \"kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.577957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.578017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.579326 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.584469 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.584785 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.591379 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.594795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvtp\" (UniqueName: \"kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp\") pod \"manila-db-sync-tzrlf\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:20 crc kubenswrapper[4760]: I0123 18:54:20.656208 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-tzrlf"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.508048 4760 scope.go:117] "RemoveContainer" containerID="c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.777647 4760 scope.go:117] "RemoveContainer" containerID="42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"
Jan 23 18:54:22 crc kubenswrapper[4760]: E0123 18:54:22.781443 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef\": container with ID starting with 42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef not found: ID does not exist" containerID="42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.781484 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"} err="failed to get container status \"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef\": rpc error: code = NotFound desc = could not find container \"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef\": container with ID starting with 42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.781509 4760 scope.go:117] "RemoveContainer" containerID="c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"
Jan 23 18:54:22 crc kubenswrapper[4760]: E0123 18:54:22.781940 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f\": container with ID starting with c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f not found: ID does not exist" containerID="c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.781959 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"} err="failed to get container status \"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f\": rpc error: code = NotFound desc = could not find container \"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f\": container with ID starting with c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.781971 4760 scope.go:117] "RemoveContainer" containerID="42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.782184 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef"} err="failed to get container status \"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef\": rpc error: code = NotFound desc = could not find container \"42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef\": container with ID starting with 42320f58919d4e4e7f698274f3eae51ce53fe659b734318604f475d6e55efdef not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.782202 4760 scope.go:117] "RemoveContainer" containerID="c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.782397 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f"} err="failed to get container status \"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f\": rpc error: code = NotFound desc = could not find container \"c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f\": container with ID starting with c40cfa4e3044287c199a721c7cf6cbae3704be588dbe94f2e1956b005a06cc5f not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.782427 4760 scope.go:117] "RemoveContainer" containerID="00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.928397 4760 scope.go:117] "RemoveContainer" containerID="6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.972619 4760 scope.go:117] "RemoveContainer" containerID="00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"
Jan 23 18:54:22 crc kubenswrapper[4760]: E0123 18:54:22.973145 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694\": container with ID starting with 00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694 not found: ID does not exist" containerID="00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.973193 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"} err="failed to get container status \"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694\": rpc error: code = NotFound desc = could not find container \"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694\": container with ID starting with 00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694 not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.973224 4760 scope.go:117] "RemoveContainer" containerID="6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"
Jan 23 18:54:22 crc kubenswrapper[4760]: E0123 18:54:22.973757 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef\": container with ID starting with 6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef not found: ID does not exist" containerID="6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.973788 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"} err="failed to get container status \"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef\": rpc error: code = NotFound desc = could not find container \"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef\": container with ID starting with 6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.973810 4760 scope.go:117] "RemoveContainer" containerID="00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.975325 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694"} err="failed to get container status \"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694\": rpc error: code = NotFound desc = could not find container \"00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694\": container with ID starting with 00e0b5636f6fe58da569f27db05b3d59d3d053d3033e6a79109718a784d98694 not found: ID does not exist"
Jan 23 18:54:22 crc kubenswrapper[4760]: I0123
18:54:22.975369 4760 scope.go:117] "RemoveContainer" containerID="6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef" Jan 23 18:54:22 crc kubenswrapper[4760]: I0123 18:54:22.975884 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef"} err="failed to get container status \"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef\": rpc error: code = NotFound desc = could not find container \"6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef\": container with ID starting with 6fd511ae5fa7b5638e48b9f419682d4d84c400799fc6c2cc4c31073d920867ef not found: ID does not exist" Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.315399 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.418644 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.819175 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"058e59c7-9277-4925-810f-105817254775","Type":"ContainerStarted","Data":"0a6ed57c4cb72d6f3e5b475a50f2ad0b3e39f9b361361dad7149ede5313ecc78"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.822027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerStarted","Data":"1e97db216e903db40fa56be614b6c2e37dfa2922af21451a32791a196d6e7f5a"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.822291 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerStarted","Data":"b5fcc64e1a698fcce7e4de74cffaae40499b1e8dd64d7c8f1b33af560fc5b437"} Jan 23 
18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.822136 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f66bb87ff-nxjgw" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon-log" containerID="cri-o://b5fcc64e1a698fcce7e4de74cffaae40499b1e8dd64d7c8f1b33af560fc5b437" gracePeriod=30 Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.822144 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f66bb87ff-nxjgw" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon" containerID="cri-o://1e97db216e903db40fa56be614b6c2e37dfa2922af21451a32791a196d6e7f5a" gracePeriod=30 Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.827017 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559467fcc6-pxz2z" event={"ID":"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6","Type":"ContainerStarted","Data":"8dc371e69b6be3d6b8ca6aae77e5148c72f1bc3f8aedc10576b40b73d4289043"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.827067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559467fcc6-pxz2z" event={"ID":"fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6","Type":"ContainerStarted","Data":"8be42dd7a3b1d1c5dfa813c4446dcc87a95ecd8e1377b69e62774c460c39715b"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.831481 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerStarted","Data":"b6fe03b52ec41f16a7037f45674bbce73623cbc4232f22ec5b7504cf051ab966"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.831532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerStarted","Data":"a243c7efa406ced0564b05c114e61bd891870e3298933942c562d8b240ec28dd"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 
18:54:23.831601 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bcbfd84fc-95mr6" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon-log" containerID="cri-o://a243c7efa406ced0564b05c114e61bd891870e3298933942c562d8b240ec28dd" gracePeriod=30 Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.831644 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bcbfd84fc-95mr6" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon" containerID="cri-o://b6fe03b52ec41f16a7037f45674bbce73623cbc4232f22ec5b7504cf051ab966" gracePeriod=30 Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.834902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cd9b568d4-rhb64" event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerStarted","Data":"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.834949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cd9b568d4-rhb64" event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerStarted","Data":"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.844816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9","Type":"ContainerStarted","Data":"6984c33f902e50e93ee791b66894bab28409f9c071f58377d9fe19ed8899dc41"} Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.847815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f66bb87ff-nxjgw" podStartSLOduration=3.491418337 podStartE2EDuration="14.847798785s" podCreationTimestamp="2026-01-23 18:54:09 +0000 UTC" firstStartedPulling="2026-01-23 18:54:11.4505906 +0000 UTC m=+3194.453048543" 
lastFinishedPulling="2026-01-23 18:54:22.806971038 +0000 UTC m=+3205.809428991" observedRunningTime="2026-01-23 18:54:23.84038482 +0000 UTC m=+3206.842842753" watchObservedRunningTime="2026-01-23 18:54:23.847798785 +0000 UTC m=+3206.850256728" Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.872678 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-559467fcc6-pxz2z" podStartSLOduration=2.608073025 podStartE2EDuration="11.87265608s" podCreationTimestamp="2026-01-23 18:54:12 +0000 UTC" firstStartedPulling="2026-01-23 18:54:13.665270357 +0000 UTC m=+3196.667728290" lastFinishedPulling="2026-01-23 18:54:22.929853412 +0000 UTC m=+3205.932311345" observedRunningTime="2026-01-23 18:54:23.869213514 +0000 UTC m=+3206.871671457" watchObservedRunningTime="2026-01-23 18:54:23.87265608 +0000 UTC m=+3206.875114013" Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.899914 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bcbfd84fc-95mr6" podStartSLOduration=3.4251209400000002 podStartE2EDuration="14.899892069s" podCreationTimestamp="2026-01-23 18:54:09 +0000 UTC" firstStartedPulling="2026-01-23 18:54:11.456675608 +0000 UTC m=+3194.459133581" lastFinishedPulling="2026-01-23 18:54:22.931446777 +0000 UTC m=+3205.933904710" observedRunningTime="2026-01-23 18:54:23.88938377 +0000 UTC m=+3206.891841703" watchObservedRunningTime="2026-01-23 18:54:23.899892069 +0000 UTC m=+3206.902350002" Jan 23 18:54:23 crc kubenswrapper[4760]: I0123 18:54:23.922370 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cd9b568d4-rhb64" podStartSLOduration=2.474254667 podStartE2EDuration="11.922344647s" podCreationTimestamp="2026-01-23 18:54:12 +0000 UTC" firstStartedPulling="2026-01-23 18:54:13.483295865 +0000 UTC m=+3196.485753798" lastFinishedPulling="2026-01-23 18:54:22.931385845 +0000 UTC m=+3205.933843778" observedRunningTime="2026-01-23 
18:54:23.91299279 +0000 UTC m=+3206.915450743" watchObservedRunningTime="2026-01-23 18:54:23.922344647 +0000 UTC m=+3206.924802580" Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.125595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-tzrlf"] Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.860685 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"058e59c7-9277-4925-810f-105817254775","Type":"ContainerStarted","Data":"eb62f88ca9ff2f35098fd69c782210f7cb335b3a3b7d6bb00405acaf7282e089"} Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.861330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"058e59c7-9277-4925-810f-105817254775","Type":"ContainerStarted","Data":"5a8553de44c0668eae26bb457dbc939ae2b6a1bec7c97ecf2a09b8d9085750d0"} Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.864084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tzrlf" event={"ID":"e09f2af9-285a-461d-b04c-77b23410dc37","Type":"ContainerStarted","Data":"a4daae0b36b32105f15851a1b8d7c2de91af814bf3a28c075fb88da353189694"} Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.868119 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9","Type":"ContainerStarted","Data":"6c3e4a69bb1d41d659edc76ee871b7df66377f7ff15104c3577c496a885e52c0"} Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.868153 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9","Type":"ContainerStarted","Data":"57a6e70c1e7c5b1e0a6e5be32779dc33a4d8231ee19cb3b7c12261955baf9a61"} Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.886814 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=8.886797561 podStartE2EDuration="8.886797561s" podCreationTimestamp="2026-01-23 18:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:54:24.881709831 +0000 UTC m=+3207.884167764" watchObservedRunningTime="2026-01-23 18:54:24.886797561 +0000 UTC m=+3207.889255484" Jan 23 18:54:24 crc kubenswrapper[4760]: I0123 18:54:24.916620 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.916599981 podStartE2EDuration="8.916599981s" podCreationTimestamp="2026-01-23 18:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:54:24.908644462 +0000 UTC m=+3207.911102405" watchObservedRunningTime="2026-01-23 18:54:24.916599981 +0000 UTC m=+3207.919057914" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.515290 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.516657 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.551905 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.553741 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.579682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.579762 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.610589 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.655771 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.905128 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.905181 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.905194 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:27 crc kubenswrapper[4760]: I0123 18:54:27.905208 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:30 crc kubenswrapper[4760]: I0123 18:54:30.370960 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bcbfd84fc-95mr6" Jan 23 18:54:30 crc kubenswrapper[4760]: I0123 18:54:30.467776 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f66bb87ff-nxjgw" Jan 23 18:54:31 crc kubenswrapper[4760]: I0123 18:54:31.601475 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:54:31 crc kubenswrapper[4760]: E0123 18:54:31.601933 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:54:32 crc kubenswrapper[4760]: I0123 18:54:32.821724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:32 crc kubenswrapper[4760]: I0123 18:54:32.822016 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:32 crc kubenswrapper[4760]: I0123 18:54:32.928985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:32 crc kubenswrapper[4760]: I0123 18:54:32.929266 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:33 crc kubenswrapper[4760]: I0123 18:54:33.455666 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 18:54:35 crc kubenswrapper[4760]: I0123 18:54:35.488593 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 18:54:40 crc kubenswrapper[4760]: E0123 18:54:40.569213 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Jan 23 18:54:40 crc kubenswrapper[4760]: E0123 18:54:40.569871 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db 
sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhvtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-tzrlf_openstack(e09f2af9-285a-461d-b04c-77b23410dc37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:54:40 crc kubenswrapper[4760]: E0123 18:54:40.571290 4760 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-tzrlf" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" Jan 23 18:54:41 crc kubenswrapper[4760]: E0123 18:54:41.056776 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-tzrlf" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" Jan 23 18:54:42 crc kubenswrapper[4760]: I0123 18:54:42.822876 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 23 18:54:42 crc kubenswrapper[4760]: I0123 18:54:42.931455 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-559467fcc6-pxz2z" podUID="fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 23 18:54:46 crc kubenswrapper[4760]: I0123 18:54:46.596235 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:54:46 crc kubenswrapper[4760]: E0123 18:54:46.596941 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:54:49 crc kubenswrapper[4760]: I0123 18:54:49.785898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:49 crc kubenswrapper[4760]: I0123 18:54:49.790982 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.185834 4760 generic.go:334] "Generic (PLEG): container finished" podID="d347a83b-4418-4917-9bec-155d15168aca" containerID="b6fe03b52ec41f16a7037f45674bbce73623cbc4232f22ec5b7504cf051ab966" exitCode=137 Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.186314 4760 generic.go:334] "Generic (PLEG): container finished" podID="d347a83b-4418-4917-9bec-155d15168aca" containerID="a243c7efa406ced0564b05c114e61bd891870e3298933942c562d8b240ec28dd" exitCode=137 Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.186034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerDied","Data":"b6fe03b52ec41f16a7037f45674bbce73623cbc4232f22ec5b7504cf051ab966"} Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.186373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerDied","Data":"a243c7efa406ced0564b05c114e61bd891870e3298933942c562d8b240ec28dd"} Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.198884 4760 generic.go:334] "Generic (PLEG): container finished" podID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerID="1e97db216e903db40fa56be614b6c2e37dfa2922af21451a32791a196d6e7f5a" exitCode=137 Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.198914 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerID="b5fcc64e1a698fcce7e4de74cffaae40499b1e8dd64d7c8f1b33af560fc5b437" exitCode=137 Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.198963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerDied","Data":"1e97db216e903db40fa56be614b6c2e37dfa2922af21451a32791a196d6e7f5a"} Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.198988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerDied","Data":"b5fcc64e1a698fcce7e4de74cffaae40499b1e8dd64d7c8f1b33af560fc5b437"} Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.229074 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f66bb87ff-nxjgw" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.284185 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bcbfd84fc-95mr6" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.374790 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rvff\" (UniqueName: \"kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff\") pod \"d347a83b-4418-4917-9bec-155d15168aca\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.374895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key\") pod \"32c783fd-6a4a-4256-91ad-01ecf9276f23\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.374975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key\") pod \"d347a83b-4418-4917-9bec-155d15168aca\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375039 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9mq\" (UniqueName: \"kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq\") pod \"32c783fd-6a4a-4256-91ad-01ecf9276f23\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs\") pod \"32c783fd-6a4a-4256-91ad-01ecf9276f23\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs\") pod \"d347a83b-4418-4917-9bec-155d15168aca\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375149 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts\") pod \"d347a83b-4418-4917-9bec-155d15168aca\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data\") pod \"d347a83b-4418-4917-9bec-155d15168aca\" (UID: \"d347a83b-4418-4917-9bec-155d15168aca\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375187 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts\") pod \"32c783fd-6a4a-4256-91ad-01ecf9276f23\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.375271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data\") pod \"32c783fd-6a4a-4256-91ad-01ecf9276f23\" (UID: \"32c783fd-6a4a-4256-91ad-01ecf9276f23\") " Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.376015 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs" (OuterVolumeSpecName: "logs") pod "d347a83b-4418-4917-9bec-155d15168aca" (UID: "d347a83b-4418-4917-9bec-155d15168aca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.378104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs" (OuterVolumeSpecName: "logs") pod "32c783fd-6a4a-4256-91ad-01ecf9276f23" (UID: "32c783fd-6a4a-4256-91ad-01ecf9276f23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.380123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d347a83b-4418-4917-9bec-155d15168aca" (UID: "d347a83b-4418-4917-9bec-155d15168aca"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.380199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff" (OuterVolumeSpecName: "kube-api-access-2rvff") pod "d347a83b-4418-4917-9bec-155d15168aca" (UID: "d347a83b-4418-4917-9bec-155d15168aca"). InnerVolumeSpecName "kube-api-access-2rvff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.380643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "32c783fd-6a4a-4256-91ad-01ecf9276f23" (UID: "32c783fd-6a4a-4256-91ad-01ecf9276f23"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.383577 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq" (OuterVolumeSpecName: "kube-api-access-sz9mq") pod "32c783fd-6a4a-4256-91ad-01ecf9276f23" (UID: "32c783fd-6a4a-4256-91ad-01ecf9276f23"). InnerVolumeSpecName "kube-api-access-sz9mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.399083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts" (OuterVolumeSpecName: "scripts") pod "d347a83b-4418-4917-9bec-155d15168aca" (UID: "d347a83b-4418-4917-9bec-155d15168aca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.400563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data" (OuterVolumeSpecName: "config-data") pod "32c783fd-6a4a-4256-91ad-01ecf9276f23" (UID: "32c783fd-6a4a-4256-91ad-01ecf9276f23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.401834 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts" (OuterVolumeSpecName: "scripts") pod "32c783fd-6a4a-4256-91ad-01ecf9276f23" (UID: "32c783fd-6a4a-4256-91ad-01ecf9276f23"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.408767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data" (OuterVolumeSpecName: "config-data") pod "d347a83b-4418-4917-9bec-155d15168aca" (UID: "d347a83b-4418-4917-9bec-155d15168aca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.477942 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c783fd-6a4a-4256-91ad-01ecf9276f23-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.477971 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d347a83b-4418-4917-9bec-155d15168aca-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.477981 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.477989 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d347a83b-4418-4917-9bec-155d15168aca-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478007 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478021 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32c783fd-6a4a-4256-91ad-01ecf9276f23-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478031 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rvff\" (UniqueName: \"kubernetes.io/projected/d347a83b-4418-4917-9bec-155d15168aca-kube-api-access-2rvff\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478046 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32c783fd-6a4a-4256-91ad-01ecf9276f23-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478060 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d347a83b-4418-4917-9bec-155d15168aca-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.478070 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9mq\" (UniqueName: \"kubernetes.io/projected/32c783fd-6a4a-4256-91ad-01ecf9276f23-kube-api-access-sz9mq\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:54 crc kubenswrapper[4760]: I0123 18:54:54.994788 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.115353 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.208361 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bcbfd84fc-95mr6" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.208350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bcbfd84fc-95mr6" event={"ID":"d347a83b-4418-4917-9bec-155d15168aca","Type":"ContainerDied","Data":"3e90a66fe948d9e1b4901a3e3ec5d2aeee6eb5c367354708f29de44897573b17"} Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.208453 4760 scope.go:117] "RemoveContainer" containerID="b6fe03b52ec41f16a7037f45674bbce73623cbc4232f22ec5b7504cf051ab966" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.212268 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f66bb87ff-nxjgw" event={"ID":"32c783fd-6a4a-4256-91ad-01ecf9276f23","Type":"ContainerDied","Data":"7f10302888273ffbfbf5a38f05f1d43da319748dbc452aad42ba2cd45982a308"} Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.212288 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f66bb87ff-nxjgw" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.217835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tzrlf" event={"ID":"e09f2af9-285a-461d-b04c-77b23410dc37","Type":"ContainerStarted","Data":"7ce1a19c996f75fc5ef11421c54cc2a46057b1f3f177f8efb7a3f891aecc4ddc"} Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.236477 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-tzrlf" podStartSLOduration=5.332624169 podStartE2EDuration="35.23645845s" podCreationTimestamp="2026-01-23 18:54:20 +0000 UTC" firstStartedPulling="2026-01-23 18:54:24.131259952 +0000 UTC m=+3207.133717885" lastFinishedPulling="2026-01-23 18:54:54.035094233 +0000 UTC m=+3237.037552166" observedRunningTime="2026-01-23 18:54:55.233334024 +0000 UTC m=+3238.235791967" watchObservedRunningTime="2026-01-23 18:54:55.23645845 +0000 UTC m=+3238.238916393" Jan 23 18:54:55 crc 
kubenswrapper[4760]: I0123 18:54:55.270486 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"] Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.280477 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bcbfd84fc-95mr6"] Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.291950 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"] Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.300941 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f66bb87ff-nxjgw"] Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.400942 4760 scope.go:117] "RemoveContainer" containerID="a243c7efa406ced0564b05c114e61bd891870e3298933942c562d8b240ec28dd" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.419031 4760 scope.go:117] "RemoveContainer" containerID="1e97db216e903db40fa56be614b6c2e37dfa2922af21451a32791a196d6e7f5a" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.577709 4760 scope.go:117] "RemoveContainer" containerID="b5fcc64e1a698fcce7e4de74cffaae40499b1e8dd64d7c8f1b33af560fc5b437" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.607342 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" path="/var/lib/kubelet/pods/32c783fd-6a4a-4256-91ad-01ecf9276f23/volumes" Jan 23 18:54:55 crc kubenswrapper[4760]: I0123 18:54:55.608299 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d347a83b-4418-4917-9bec-155d15168aca" path="/var/lib/kubelet/pods/d347a83b-4418-4917-9bec-155d15168aca/volumes" Jan 23 18:54:56 crc kubenswrapper[4760]: I0123 18:54:56.660547 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:54:56 crc kubenswrapper[4760]: I0123 18:54:56.951190 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/horizon-559467fcc6-pxz2z" Jan 23 18:54:57 crc kubenswrapper[4760]: I0123 18:54:57.015818 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:54:57 crc kubenswrapper[4760]: I0123 18:54:57.241567 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon-log" containerID="cri-o://4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1" gracePeriod=30 Jan 23 18:54:57 crc kubenswrapper[4760]: I0123 18:54:57.242141 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" containerID="cri-o://392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3" gracePeriod=30 Jan 23 18:55:00 crc kubenswrapper[4760]: I0123 18:55:00.595515 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:55:00 crc kubenswrapper[4760]: E0123 18:55:00.596248 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:55:01 crc kubenswrapper[4760]: I0123 18:55:01.278106 4760 generic.go:334] "Generic (PLEG): container finished" podID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerID="392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3" exitCode=0 Jan 23 18:55:01 crc kubenswrapper[4760]: I0123 18:55:01.278162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cd9b568d4-rhb64" 
event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerDied","Data":"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3"} Jan 23 18:55:02 crc kubenswrapper[4760]: I0123 18:55:02.822152 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 23 18:55:08 crc kubenswrapper[4760]: I0123 18:55:08.348960 4760 generic.go:334] "Generic (PLEG): container finished" podID="e09f2af9-285a-461d-b04c-77b23410dc37" containerID="7ce1a19c996f75fc5ef11421c54cc2a46057b1f3f177f8efb7a3f891aecc4ddc" exitCode=0 Jan 23 18:55:08 crc kubenswrapper[4760]: I0123 18:55:08.349084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tzrlf" event={"ID":"e09f2af9-285a-461d-b04c-77b23410dc37","Type":"ContainerDied","Data":"7ce1a19c996f75fc5ef11421c54cc2a46057b1f3f177f8efb7a3f891aecc4ddc"} Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.760688 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-tzrlf" Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.922019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data\") pod \"e09f2af9-285a-461d-b04c-77b23410dc37\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.922423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle\") pod \"e09f2af9-285a-461d-b04c-77b23410dc37\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.922468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data\") pod \"e09f2af9-285a-461d-b04c-77b23410dc37\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.922659 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvtp\" (UniqueName: \"kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp\") pod \"e09f2af9-285a-461d-b04c-77b23410dc37\" (UID: \"e09f2af9-285a-461d-b04c-77b23410dc37\") " Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.927320 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "e09f2af9-285a-461d-b04c-77b23410dc37" (UID: "e09f2af9-285a-461d-b04c-77b23410dc37"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.929429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp" (OuterVolumeSpecName: "kube-api-access-fhvtp") pod "e09f2af9-285a-461d-b04c-77b23410dc37" (UID: "e09f2af9-285a-461d-b04c-77b23410dc37"). InnerVolumeSpecName "kube-api-access-fhvtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.930109 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data" (OuterVolumeSpecName: "config-data") pod "e09f2af9-285a-461d-b04c-77b23410dc37" (UID: "e09f2af9-285a-461d-b04c-77b23410dc37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:09 crc kubenswrapper[4760]: I0123 18:55:09.947903 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e09f2af9-285a-461d-b04c-77b23410dc37" (UID: "e09f2af9-285a-461d-b04c-77b23410dc37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.025523 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.025552 4760 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.025562 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvtp\" (UniqueName: \"kubernetes.io/projected/e09f2af9-285a-461d-b04c-77b23410dc37-kube-api-access-fhvtp\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.025572 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09f2af9-285a-461d-b04c-77b23410dc37-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.370630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-tzrlf" event={"ID":"e09f2af9-285a-461d-b04c-77b23410dc37","Type":"ContainerDied","Data":"a4daae0b36b32105f15851a1b8d7c2de91af814bf3a28c075fb88da353189694"} Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.370670 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4daae0b36b32105f15851a1b8d7c2de91af814bf3a28c075fb88da353189694" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.370793 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-tzrlf" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.661548 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:10 crc kubenswrapper[4760]: E0123 18:55:10.661924 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.661938 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: E0123 18:55:10.661947 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" containerName="manila-db-sync" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.661953 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" containerName="manila-db-sync" Jan 23 18:55:10 crc kubenswrapper[4760]: E0123 18:55:10.661972 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon-log" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.661978 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon-log" Jan 23 18:55:10 crc kubenswrapper[4760]: E0123 18:55:10.661985 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.661990 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: E0123 18:55:10.662000 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon-log" Jan 23 18:55:10 crc 
kubenswrapper[4760]: I0123 18:55:10.662005 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon-log" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.662206 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon-log" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.662224 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" containerName="manila-db-sync" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.662232 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon-log" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.662240 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c783fd-6a4a-4256-91ad-01ecf9276f23" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.662249 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d347a83b-4418-4917-9bec-155d15168aca" containerName="horizon" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.663146 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.666762 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.666845 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-jfm98" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.667049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.667155 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.675603 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.739913 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.746591 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.750025 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.750341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.835535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-px9rg"] Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.839808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.839854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.839891 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data\") pod \"manila-share-share1-0\" 
(UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxgs\" (UniqueName: \"kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840219 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840587 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840621 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8k6\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.840806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts\") pod \"manila-share-share1-0\" (UID: 
\"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.847799 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.850975 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-px9rg"] Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8k6\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 
crc kubenswrapper[4760]: I0123 18:55:10.943237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-config\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943328 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943332 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.943368 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqrd\" (UniqueName: \"kubernetes.io/projected/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-kube-api-access-gkqrd\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944444 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944739 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944818 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944869 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944925 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxgs\" (UniqueName: 
\"kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.944980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945028 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945140 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.945564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.950318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.951218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " 
pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.951662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.958444 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.972227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.972373 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.972659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.972951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.973338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.986202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxgs\" (UniqueName: \"kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs\") pod \"manila-scheduler-0\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:10 crc kubenswrapper[4760]: I0123 18:55:10.986202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8k6\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6\") pod \"manila-share-share1-0\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:10.997468 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:10.998881 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.004133 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.006924 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.046950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkqrd\" (UniqueName: \"kubernetes.io/projected/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-kube-api-access-gkqrd\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.047047 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.047127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.047174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.047270 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.047302 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-config\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.048332 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-config\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.049284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.052341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.052891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.062718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.064764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqrd\" (UniqueName: \"kubernetes.io/projected/a52c0919-287b-48b2-83f6-9bc4fb33eaa6-kube-api-access-gkqrd\") pod \"dnsmasq-dns-76b5fdb995-px9rg\" (UID: \"a52c0919-287b-48b2-83f6-9bc4fb33eaa6\") " pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.072754 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.149904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.149946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqm2\" (UniqueName: \"kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.149981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.150032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.150049 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.150126 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.150152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.175954 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253081 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253485 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253520 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqm2\" (UniqueName: \"kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.255239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.253228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") 
" pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.264980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.268542 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.271227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.277026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.278923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqm2\" (UniqueName: \"kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2\") pod \"manila-api-0\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.281666 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.478442 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.688448 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.791884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-px9rg"] Jan 23 18:55:11 crc kubenswrapper[4760]: I0123 18:55:11.930721 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.161646 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.390035 4760 generic.go:334] "Generic (PLEG): container finished" podID="a52c0919-287b-48b2-83f6-9bc4fb33eaa6" containerID="3df17e2345c072687f25e3e13eef0baa6f32345dcaee59bf768beade0682296d" exitCode=0 Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.390090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" event={"ID":"a52c0919-287b-48b2-83f6-9bc4fb33eaa6","Type":"ContainerDied","Data":"3df17e2345c072687f25e3e13eef0baa6f32345dcaee59bf768beade0682296d"} Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.390115 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" event={"ID":"a52c0919-287b-48b2-83f6-9bc4fb33eaa6","Type":"ContainerStarted","Data":"482f3fcab5c5de64208dc0e04cb6781149351c65a49c41a1cad065f4f1fc146b"} Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.392092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerStarted","Data":"d84af4d38258c1bc2a1a309a302d5447eb7127b3a039c15ca5c016b1801d5e06"} Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.394135 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerStarted","Data":"7684db9a2447a3e472192785f6787862efda9aba73eac668187becc4cc20876e"} Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.396592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerStarted","Data":"fcaf53c039234f6ebe22ba1438325128000a7a2fc28715e4ec5a2d341fed8cb3"} Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.596350 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:55:12 crc kubenswrapper[4760]: E0123 18:55:12.596805 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:55:12 crc kubenswrapper[4760]: I0123 18:55:12.822568 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.500023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" 
event={"ID":"a52c0919-287b-48b2-83f6-9bc4fb33eaa6","Type":"ContainerStarted","Data":"c49636d2b1ba932237e264f5fc45a3ec49ff071d4f286ca030476025828ed801"} Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.501006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.516777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerStarted","Data":"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125"} Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.556207 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerStarted","Data":"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64"} Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.556261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerStarted","Data":"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360"} Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.558607 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.580229 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" podStartSLOduration=3.580204214 podStartE2EDuration="3.580204214s" podCreationTimestamp="2026-01-23 18:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:55:13.553199511 +0000 UTC m=+3256.555657454" watchObservedRunningTime="2026-01-23 18:55:13.580204214 +0000 UTC m=+3256.582662147" Jan 23 18:55:13 crc 
kubenswrapper[4760]: I0123 18:55:13.623849 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.6238252060000002 podStartE2EDuration="3.623825206s" podCreationTimestamp="2026-01-23 18:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:55:13.60327618 +0000 UTC m=+3256.605734123" watchObservedRunningTime="2026-01-23 18:55:13.623825206 +0000 UTC m=+3256.626283139" Jan 23 18:55:13 crc kubenswrapper[4760]: I0123 18:55:13.992316 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:14 crc kubenswrapper[4760]: I0123 18:55:14.577031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerStarted","Data":"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15"} Jan 23 18:55:14 crc kubenswrapper[4760]: I0123 18:55:14.605881 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.849373298 podStartE2EDuration="4.605854093s" podCreationTimestamp="2026-01-23 18:55:10 +0000 UTC" firstStartedPulling="2026-01-23 18:55:11.939008743 +0000 UTC m=+3254.941466676" lastFinishedPulling="2026-01-23 18:55:12.695489528 +0000 UTC m=+3255.697947471" observedRunningTime="2026-01-23 18:55:14.595649652 +0000 UTC m=+3257.598107575" watchObservedRunningTime="2026-01-23 18:55:14.605854093 +0000 UTC m=+3257.608312036" Jan 23 18:55:15 crc kubenswrapper[4760]: I0123 18:55:15.585577 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api-log" containerID="cri-o://f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" gracePeriod=30 Jan 23 18:55:15 crc kubenswrapper[4760]: 
I0123 18:55:15.585655 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api" containerID="cri-o://2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" gracePeriod=30 Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.309656 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.484561 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgqm2\" (UniqueName: \"kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485062 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485157 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485193 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485208 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485248 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485307 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts\") pod \"7d701bfb-9ebe-4d18-8239-17237ad7b466\" (UID: \"7d701bfb-9ebe-4d18-8239-17237ad7b466\") " Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.485705 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d701bfb-9ebe-4d18-8239-17237ad7b466-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.491449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs" (OuterVolumeSpecName: "logs") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.499554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts" (OuterVolumeSpecName: "scripts") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.499948 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.504641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2" (OuterVolumeSpecName: "kube-api-access-mgqm2") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "kube-api-access-mgqm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.523323 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.549705 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data" (OuterVolumeSpecName: "config-data") pod "7d701bfb-9ebe-4d18-8239-17237ad7b466" (UID: "7d701bfb-9ebe-4d18-8239-17237ad7b466"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593118 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593168 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593180 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d701bfb-9ebe-4d18-8239-17237ad7b466-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593192 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593202 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d701bfb-9ebe-4d18-8239-17237ad7b466-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.593216 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgqm2\" (UniqueName: 
\"kubernetes.io/projected/7d701bfb-9ebe-4d18-8239-17237ad7b466-kube-api-access-mgqm2\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596780 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerID="2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" exitCode=0 Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596808 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerID="f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" exitCode=143 Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerDied","Data":"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64"} Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerDied","Data":"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360"} Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596897 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7d701bfb-9ebe-4d18-8239-17237ad7b466","Type":"ContainerDied","Data":"fcaf53c039234f6ebe22ba1438325128000a7a2fc28715e4ec5a2d341fed8cb3"} Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.596921 4760 scope.go:117] "RemoveContainer" containerID="2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.597169 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.631050 4760 scope.go:117] "RemoveContainer" containerID="f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.661531 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.671481 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.688124 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:16 crc kubenswrapper[4760]: E0123 18:55:16.688611 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api-log" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.688631 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api-log" Jan 23 18:55:16 crc kubenswrapper[4760]: E0123 18:55:16.688656 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.688664 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.689102 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api-log" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.689123 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" containerName="manila-api" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.690423 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.694890 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.695074 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.695221 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.695569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.698333 4760 scope.go:117] "RemoveContainer" containerID="2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" Jan 23 18:55:16 crc kubenswrapper[4760]: E0123 18:55:16.699881 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64\": container with ID starting with 2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64 not found: ID does not exist" containerID="2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.699918 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64"} err="failed to get container status \"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64\": rpc error: code = NotFound desc = could not find container \"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64\": container with ID starting with 2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64 not found: ID does not exist" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 
18:55:16.699944 4760 scope.go:117] "RemoveContainer" containerID="f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" Jan 23 18:55:16 crc kubenswrapper[4760]: E0123 18:55:16.700274 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360\": container with ID starting with f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360 not found: ID does not exist" containerID="f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.700298 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360"} err="failed to get container status \"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360\": rpc error: code = NotFound desc = could not find container \"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360\": container with ID starting with f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360 not found: ID does not exist" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.700316 4760 scope.go:117] "RemoveContainer" containerID="2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.700580 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64"} err="failed to get container status \"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64\": rpc error: code = NotFound desc = could not find container \"2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64\": container with ID starting with 2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64 not found: ID does not exist" Jan 23 18:55:16 crc 
kubenswrapper[4760]: I0123 18:55:16.700599 4760 scope.go:117] "RemoveContainer" containerID="f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.700829 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360"} err="failed to get container status \"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360\": rpc error: code = NotFound desc = could not find container \"f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360\": container with ID starting with f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360 not found: ID does not exist" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.797583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64c390-9956-4595-b1b9-9bf78be32e68-logs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.797720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-public-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.797844 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-scripts\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.797922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.797954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data-custom\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.798076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.798230 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.798472 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c64c390-9956-4595-b1b9-9bf78be32e68-etc-machine-id\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.798555 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcshr\" (UniqueName: 
\"kubernetes.io/projected/0c64c390-9956-4595-b1b9-9bf78be32e68-kube-api-access-fcshr\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data-custom\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c64c390-9956-4595-b1b9-9bf78be32e68-etc-machine-id\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcshr\" (UniqueName: \"kubernetes.io/projected/0c64c390-9956-4595-b1b9-9bf78be32e68-kube-api-access-fcshr\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " 
pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64c390-9956-4595-b1b9-9bf78be32e68-logs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-public-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-scripts\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.900967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.901699 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c64c390-9956-4595-b1b9-9bf78be32e68-etc-machine-id\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.905264 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.905986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-public-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.907608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-scripts\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.908780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64c390-9956-4595-b1b9-9bf78be32e68-logs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.909932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.910400 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-config-data-custom\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.927961 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c64c390-9956-4595-b1b9-9bf78be32e68-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:16 crc kubenswrapper[4760]: I0123 18:55:16.932112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcshr\" (UniqueName: \"kubernetes.io/projected/0c64c390-9956-4595-b1b9-9bf78be32e68-kube-api-access-fcshr\") pod \"manila-api-0\" (UID: \"0c64c390-9956-4595-b1b9-9bf78be32e68\") " pod="openstack/manila-api-0" Jan 23 18:55:17 crc kubenswrapper[4760]: I0123 18:55:17.066743 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 18:55:17 crc kubenswrapper[4760]: I0123 18:55:17.619799 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d701bfb-9ebe-4d18-8239-17237ad7b466" path="/var/lib/kubelet/pods/7d701bfb-9ebe-4d18-8239-17237ad7b466/volumes" Jan 23 18:55:17 crc kubenswrapper[4760]: I0123 18:55:17.745885 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.674539 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.675079 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-central-agent" containerID="cri-o://f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415" gracePeriod=30 Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.676252 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="sg-core" 
containerID="cri-o://3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81" gracePeriod=30 Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.676435 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="proxy-httpd" containerID="cri-o://9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae" gracePeriod=30 Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.676492 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-notification-agent" containerID="cri-o://2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c" gracePeriod=30 Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.788026 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.790768 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.796459 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.980568 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9f5\" (UniqueName: \"kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.980636 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:18 crc kubenswrapper[4760]: I0123 18:55:18.980667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.082595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9f5\" (UniqueName: \"kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.082649 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.082676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.083143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.083222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.127556 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9f5\" (UniqueName: \"kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5\") pod \"redhat-marketplace-zgdnh\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.408336 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.647623 4760 generic.go:334] "Generic (PLEG): container finished" podID="127ea512-daf4-4310-b214-6ce12ba9adad" containerID="3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81" exitCode=2 Jan 23 18:55:19 crc kubenswrapper[4760]: I0123 18:55:19.647666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerDied","Data":"3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81"} Jan 23 18:55:20 crc kubenswrapper[4760]: I0123 18:55:20.659490 4760 generic.go:334] "Generic (PLEG): container finished" podID="127ea512-daf4-4310-b214-6ce12ba9adad" containerID="9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae" exitCode=0 Jan 23 18:55:20 crc kubenswrapper[4760]: I0123 18:55:20.659871 4760 generic.go:334] "Generic (PLEG): container finished" podID="127ea512-daf4-4310-b214-6ce12ba9adad" containerID="f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415" exitCode=0 Jan 23 18:55:20 crc kubenswrapper[4760]: I0123 18:55:20.659573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerDied","Data":"9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae"} Jan 23 18:55:20 crc kubenswrapper[4760]: I0123 18:55:20.659930 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerDied","Data":"f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415"} Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.177698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-px9rg" Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.234522 4760 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.234855 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerName="dnsmasq-dns" containerID="cri-o://89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853" gracePeriod=10 Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.286433 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.673490 4760 generic.go:334] "Generic (PLEG): container finished" podID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerID="89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853" exitCode=0 Jan 23 18:55:21 crc kubenswrapper[4760]: I0123 18:55:21.673596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" event={"ID":"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa","Type":"ContainerDied","Data":"89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853"} Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.295987 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.448908 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.449002 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.449033 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.449123 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.449229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpqdh\" (UniqueName: \"kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.449297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb\") pod \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\" (UID: \"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa\") " Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.468266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh" (OuterVolumeSpecName: "kube-api-access-dpqdh") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "kube-api-access-dpqdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.513453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.520132 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.524024 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.525699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.526529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config" (OuterVolumeSpecName: "config") pod "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" (UID: "8d1899a1-9b41-4804-8aa1-6fc4d97a94aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.555707 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.556002 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpqdh\" (UniqueName: \"kubernetes.io/projected/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-kube-api-access-dpqdh\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.556013 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.556025 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-config\") on node \"crc\" DevicePath \"\"" Jan 
23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.556034 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.556042 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.696752 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" event={"ID":"8d1899a1-9b41-4804-8aa1-6fc4d97a94aa","Type":"ContainerDied","Data":"883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22"} Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.696839 4760 scope.go:117] "RemoveContainer" containerID="89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.696977 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-8t5dz" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.700852 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.706258 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0c64c390-9956-4595-b1b9-9bf78be32e68","Type":"ContainerStarted","Data":"c08f953507dc7f00cc9991639c91f5c21d3d1c4bd9209777a2506ff21e0d483f"} Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.823976 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cd9b568d4-rhb64" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.824769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.901776 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.915205 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-8t5dz"] Jan 23 18:55:22 crc kubenswrapper[4760]: I0123 18:55:22.920312 4760 scope.go:117] "RemoveContainer" containerID="b6e27062c709616b88da49d5a34d97fcadbfbfbc0a9cb8324921aaa0246eb53e" Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.595087 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:55:23 crc kubenswrapper[4760]: E0123 18:55:23.597722 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.620961 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" path="/var/lib/kubelet/pods/8d1899a1-9b41-4804-8aa1-6fc4d97a94aa/volumes" Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.738724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0c64c390-9956-4595-b1b9-9bf78be32e68","Type":"ContainerStarted","Data":"e6480644b24ce80e021aec700f3d6ead15b775bc33c56b30a35b6aeac5aad402"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.738769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0c64c390-9956-4595-b1b9-9bf78be32e68","Type":"ContainerStarted","Data":"b553d3e48131d63b5e4a7992564ac76603aec1e6bb60a3d23c0ad8589d8aa69d"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.740168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.747297 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerStarted","Data":"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.773337 4760 generic.go:334] "Generic (PLEG): container finished" podID="127ea512-daf4-4310-b214-6ce12ba9adad" containerID="2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c" exitCode=0 Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.773760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerDied","Data":"2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.775393 4760 generic.go:334] "Generic (PLEG): container finished" podID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerID="98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8" exitCode=0 Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.775463 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerDied","Data":"98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.775486 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerStarted","Data":"54df908dda044ffdc9a8f9425cede2dc42677d59a045d7eb5e490f24f4e25e37"} Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.777484 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=7.777467628 podStartE2EDuration="7.777467628s" podCreationTimestamp="2026-01-23 18:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:55:23.764571212 +0000 UTC m=+3266.767029155" watchObservedRunningTime="2026-01-23 18:55:23.777467628 +0000 UTC m=+3266.779925561" Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.778940 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:55:23 crc kubenswrapper[4760]: I0123 18:55:23.894180 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.003163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.003250 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.003359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.003802 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.004140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.004197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.004244 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.004343 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qpfj\" (UniqueName: \"kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.004432 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd\") pod \"127ea512-daf4-4310-b214-6ce12ba9adad\" (UID: \"127ea512-daf4-4310-b214-6ce12ba9adad\") " Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.005352 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.024724 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.052124 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts" (OuterVolumeSpecName: "scripts") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.096483 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj" (OuterVolumeSpecName: "kube-api-access-6qpfj") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "kube-api-access-6qpfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.105252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.115354 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.115394 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qpfj\" (UniqueName: \"kubernetes.io/projected/127ea512-daf4-4310-b214-6ce12ba9adad-kube-api-access-6qpfj\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.143498 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/127ea512-daf4-4310-b214-6ce12ba9adad-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.143549 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.218706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.244980 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.275523 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.321742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data" (OuterVolumeSpecName: "config-data") pod "127ea512-daf4-4310-b214-6ce12ba9adad" (UID: "127ea512-daf4-4310-b214-6ce12ba9adad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.347457 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.347511 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127ea512-daf4-4310-b214-6ce12ba9adad-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.794720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerStarted","Data":"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2"} Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.802537 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"127ea512-daf4-4310-b214-6ce12ba9adad","Type":"ContainerDied","Data":"18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49"} Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.802588 4760 scope.go:117] "RemoveContainer" containerID="9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.802601 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.826303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerStarted","Data":"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d"} Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.867926 4760 scope.go:117] "RemoveContainer" containerID="3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.900868 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.399381796 podStartE2EDuration="14.900842678s" podCreationTimestamp="2026-01-23 18:55:10 +0000 UTC" firstStartedPulling="2026-01-23 18:55:11.717213893 +0000 UTC m=+3254.719671826" lastFinishedPulling="2026-01-23 18:55:22.218674775 +0000 UTC m=+3265.221132708" observedRunningTime="2026-01-23 18:55:24.855310953 +0000 UTC m=+3267.857768886" watchObservedRunningTime="2026-01-23 18:55:24.900842678 +0000 UTC m=+3267.903300631" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.925146 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.932887 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.946697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.947235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="proxy-httpd" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.947762 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" 
containerName="proxy-httpd" Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.948465 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerName="dnsmasq-dns" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948476 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerName="dnsmasq-dns" Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.948494 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-notification-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948502 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-notification-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.948514 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="sg-core" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948521 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="sg-core" Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.948533 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-central-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948540 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-central-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: E0123 18:55:24.948557 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerName="init" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948564 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" 
containerName="init" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948796 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="proxy-httpd" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948816 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1899a1-9b41-4804-8aa1-6fc4d97a94aa" containerName="dnsmasq-dns" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948826 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-central-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948838 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="ceilometer-notification-agent" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.948850 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" containerName="sg-core" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.954621 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.954724 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.958144 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.958186 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.958430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:55:24 crc kubenswrapper[4760]: I0123 18:55:24.959516 4760 scope.go:117] "RemoveContainer" containerID="2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.000325 4760 scope.go:117] "RemoveContainer" containerID="f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069151 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " 
pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069553 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069818 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmbvk\" (UniqueName: \"kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.069841 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.171867 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.171913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.171937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmbvk\" (UniqueName: \"kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.171957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.171992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.172024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.172084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.172100 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.173526 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.174525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.178362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.179348 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.179747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.191190 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.195940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.195955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmbvk\" (UniqueName: \"kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk\") pod \"ceilometer-0\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.278241 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.424009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.609023 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127ea512-daf4-4310-b214-6ce12ba9adad" path="/var/lib/kubelet/pods/127ea512-daf4-4310-b214-6ce12ba9adad/volumes" Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.775283 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.832974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerStarted","Data":"4285f59f16972a4920a9e8e755b30727d6f533dab39a1825099bfda155080613"} Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.835863 4760 generic.go:334] "Generic (PLEG): container finished" podID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerID="c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d" exitCode=0 Jan 23 18:55:25 crc kubenswrapper[4760]: I0123 18:55:25.835920 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerDied","Data":"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d"} Jan 23 18:55:27 crc kubenswrapper[4760]: W0123 18:55:27.295996 4760 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-conmon-f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-conmon-f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360.scope: no such file or directory Jan 23 18:55:27 crc kubenswrapper[4760]: W0123 18:55:27.296600 4760 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-f62de38fcf20c4941455af61b3844e8edb19188b8a543ce724a059a3b3b60360.scope: no such file or directory Jan 23 18:55:27 crc kubenswrapper[4760]: W0123 18:55:27.296892 4760 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-conmon-2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-conmon-2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64.scope: no such file or directory Jan 23 18:55:27 crc kubenswrapper[4760]: W0123 18:55:27.297018 4760 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d701bfb_9ebe_4d18_8239_17237ad7b466.slice/crio-2f6d23240547a644e986fae58ba2f1f751e07f67d228c8850ae46f67c834fa64.scope: no 
such file or directory Jan 23 18:55:27 crc kubenswrapper[4760]: E0123 18:55:27.508189 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1899a1_9b41_4804_8aa1_6fc4d97a94aa.slice/crio-883654ece72feb9aeacc8c16ca733dfc5f1d024e6a3b7568d384d279fdae0d22\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1899a1_9b41_4804_8aa1_6fc4d97a94aa.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1899a1_9b41_4804_8aa1_6fc4d97a94aa.slice/crio-89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1899a1_9b41_4804_8aa1_6fc4d97a94aa.slice/crio-conmon-89994adfba93773ab056eb44bf39c12d27142bbb4da8cf5f012c950c011e6853.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb3feba_deb6_4186_8cc2_fcfbdedd6154.slice/crio-4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-conmon-9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb3feba_deb6_4186_8cc2_fcfbdedd6154.slice/crio-conmon-4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-18d9d365af20e70d16c3ff9495618a97467cc7eed705a0ad9f41e997108cef49\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-conmon-f6a4ea87cdd476e1377e6714d4de4c3a83825a3416607aad298d45c657cd6415.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-9e7f2d50aa31fd4a62cf26a280283851d1d9b026a8dd316737a8ee7dfa7cfcae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-conmon-3d0dded1d13fd4d3025916f62b73b555e265044d7a8bf6bb247b475f54988f81.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127ea512_daf4_4310_b214_6ce12ba9adad.slice/crio-conmon-2c105ce41b18317f612ca83d24aac7fc32c45e35b9c011ed6097378a4c14519c.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.805426 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.853616 4760 generic.go:334] "Generic (PLEG): container finished" podID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerID="4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1" exitCode=137 Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.853711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cd9b568d4-rhb64" event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerDied","Data":"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1"} Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.853760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cd9b568d4-rhb64" event={"ID":"0eb3feba-deb6-4186-8cc2-fcfbdedd6154","Type":"ContainerDied","Data":"687361a56673a33557c7dedc77a626ffe4f4d9b5b300abdc94e9b093ed751e15"} Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.853778 4760 scope.go:117] "RemoveContainer" containerID="392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.853938 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cd9b568d4-rhb64" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.870595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerStarted","Data":"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585"} Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.875195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerStarted","Data":"08971d2d0d6b1d60e28f1bd953dcb6ac4bb3d3df81581d4c4b6c0008758b9488"} Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.897494 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgdnh" podStartSLOduration=6.8210086279999995 podStartE2EDuration="9.89747074s" podCreationTimestamp="2026-01-23 18:55:18 +0000 UTC" firstStartedPulling="2026-01-23 18:55:23.77866771 +0000 UTC m=+3266.781125643" lastFinishedPulling="2026-01-23 18:55:26.855129822 +0000 UTC m=+3269.857587755" observedRunningTime="2026-01-23 18:55:27.891614059 +0000 UTC m=+3270.894071992" watchObservedRunningTime="2026-01-23 18:55:27.89747074 +0000 UTC m=+3270.899928673" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.950953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: 
\"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rphc\" (UniqueName: \"kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951183 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.951337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs\") pod \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\" (UID: \"0eb3feba-deb6-4186-8cc2-fcfbdedd6154\") " Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.952520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs" (OuterVolumeSpecName: "logs") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.958812 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc" (OuterVolumeSpecName: "kube-api-access-8rphc") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "kube-api-access-8rphc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.962963 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.983731 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts" (OuterVolumeSpecName: "scripts") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:27 crc kubenswrapper[4760]: I0123 18:55:27.989199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.017617 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.022591 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data" (OuterVolumeSpecName: "config-data") pod "0eb3feba-deb6-4186-8cc2-fcfbdedd6154" (UID: "0eb3feba-deb6-4186-8cc2-fcfbdedd6154"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053671 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053720 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053731 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053739 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc 
kubenswrapper[4760]: I0123 18:55:28.053751 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053759 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.053768 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rphc\" (UniqueName: \"kubernetes.io/projected/0eb3feba-deb6-4186-8cc2-fcfbdedd6154-kube-api-access-8rphc\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.090005 4760 scope.go:117] "RemoveContainer" containerID="4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.113535 4760 scope.go:117] "RemoveContainer" containerID="392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3" Jan 23 18:55:28 crc kubenswrapper[4760]: E0123 18:55:28.114262 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3\": container with ID starting with 392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3 not found: ID does not exist" containerID="392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.114310 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3"} err="failed to get container status \"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3\": rpc error: code = NotFound desc = 
could not find container \"392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3\": container with ID starting with 392d09919b09b4540ad6a07bb6c9eb26b1d393688a1deb2a38b75418f36f19c3 not found: ID does not exist" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.114346 4760 scope.go:117] "RemoveContainer" containerID="4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1" Jan 23 18:55:28 crc kubenswrapper[4760]: E0123 18:55:28.114754 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1\": container with ID starting with 4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1 not found: ID does not exist" containerID="4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.114801 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1"} err="failed to get container status \"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1\": rpc error: code = NotFound desc = could not find container \"4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1\": container with ID starting with 4cad5b972b656674f83ae398ea12d96093f3d100b09dfdd5835da6b544a5fad1 not found: ID does not exist" Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.223574 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.233390 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cd9b568d4-rhb64"] Jan 23 18:55:28 crc kubenswrapper[4760]: I0123 18:55:28.886417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerStarted","Data":"59a203e96e41013107bad19dee77d8e041d1ab7d1544dfc680a5d3d7061dc5a5"} Jan 23 18:55:29 crc kubenswrapper[4760]: I0123 18:55:29.408449 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:29 crc kubenswrapper[4760]: I0123 18:55:29.408484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:29 crc kubenswrapper[4760]: I0123 18:55:29.459755 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:29 crc kubenswrapper[4760]: I0123 18:55:29.611207 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" path="/var/lib/kubelet/pods/0eb3feba-deb6-4186-8cc2-fcfbdedd6154/volumes" Jan 23 18:55:29 crc kubenswrapper[4760]: I0123 18:55:29.906452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerStarted","Data":"71375fc65809656086945ec3169a5457f338b8cfcbe0c33e034b13523cd6b32f"} Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.917367 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerStarted","Data":"30dab61dfd4ebcef352ebbe36b6dec347a12dd7698f910db5f8341e4a997d696"} Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.917530 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-central-agent" containerID="cri-o://08971d2d0d6b1d60e28f1bd953dcb6ac4bb3d3df81581d4c4b6c0008758b9488" gracePeriod=30 Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.917850 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="proxy-httpd" containerID="cri-o://30dab61dfd4ebcef352ebbe36b6dec347a12dd7698f910db5f8341e4a997d696" gracePeriod=30 Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.917941 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="sg-core" containerID="cri-o://71375fc65809656086945ec3169a5457f338b8cfcbe0c33e034b13523cd6b32f" gracePeriod=30 Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.917985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.918004 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-notification-agent" containerID="cri-o://59a203e96e41013107bad19dee77d8e041d1ab7d1544dfc680a5d3d7061dc5a5" gracePeriod=30 Jan 23 18:55:30 crc kubenswrapper[4760]: I0123 18:55:30.954830 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.29477584 podStartE2EDuration="6.954806316s" podCreationTimestamp="2026-01-23 18:55:24 +0000 UTC" firstStartedPulling="2026-01-23 18:55:25.780465844 +0000 UTC m=+3268.782923777" lastFinishedPulling="2026-01-23 18:55:30.44049632 +0000 UTC m=+3273.442954253" observedRunningTime="2026-01-23 18:55:30.944231774 +0000 UTC m=+3273.946689707" watchObservedRunningTime="2026-01-23 18:55:30.954806316 +0000 UTC m=+3273.957264249" Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.078690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929140 4760 
generic.go:334] "Generic (PLEG): container finished" podID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerID="30dab61dfd4ebcef352ebbe36b6dec347a12dd7698f910db5f8341e4a997d696" exitCode=0 Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929188 4760 generic.go:334] "Generic (PLEG): container finished" podID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerID="71375fc65809656086945ec3169a5457f338b8cfcbe0c33e034b13523cd6b32f" exitCode=2 Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929199 4760 generic.go:334] "Generic (PLEG): container finished" podID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerID="59a203e96e41013107bad19dee77d8e041d1ab7d1544dfc680a5d3d7061dc5a5" exitCode=0 Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerDied","Data":"30dab61dfd4ebcef352ebbe36b6dec347a12dd7698f910db5f8341e4a997d696"} Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerDied","Data":"71375fc65809656086945ec3169a5457f338b8cfcbe0c33e034b13523cd6b32f"} Jan 23 18:55:31 crc kubenswrapper[4760]: I0123 18:55:31.929293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerDied","Data":"59a203e96e41013107bad19dee77d8e041d1ab7d1544dfc680a5d3d7061dc5a5"} Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.851200 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.924716 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942058 4760 generic.go:334] 
"Generic (PLEG): container finished" podID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerID="08971d2d0d6b1d60e28f1bd953dcb6ac4bb3d3df81581d4c4b6c0008758b9488" exitCode=0 Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerDied","Data":"08971d2d0d6b1d60e28f1bd953dcb6ac4bb3d3df81581d4c4b6c0008758b9488"} Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6","Type":"ContainerDied","Data":"4285f59f16972a4920a9e8e755b30727d6f533dab39a1825099bfda155080613"} Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942238 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4285f59f16972a4920a9e8e755b30727d6f533dab39a1825099bfda155080613" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942314 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="manila-scheduler" containerID="cri-o://9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125" gracePeriod=30 Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:32.942485 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="probe" containerID="cri-o://d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15" gracePeriod=30 Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.003321 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063174 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063357 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063395 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063498 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063564 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.063595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmbvk\" (UniqueName: \"kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk\") pod \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\" (UID: \"a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6\") " Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.064686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.065227 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.065659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.079149 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk" (OuterVolumeSpecName: "kube-api-access-wmbvk") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "kube-api-access-wmbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.079658 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts" (OuterVolumeSpecName: "scripts") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.130940 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.145784 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.153147 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167256 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167291 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmbvk\" (UniqueName: \"kubernetes.io/projected/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-kube-api-access-wmbvk\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167307 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167320 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167332 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.167345 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.179425 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data" (OuterVolumeSpecName: "config-data") pod "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" (UID: "a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.268927 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.952741 4760 generic.go:334] "Generic (PLEG): container finished" podID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerID="d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15" exitCode=0 Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.952802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerDied","Data":"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15"} Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.953122 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.987650 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:33 crc kubenswrapper[4760]: I0123 18:55:33.997140 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.015828 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016263 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon-log" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016280 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon-log" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016299 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-central-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016307 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-central-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016318 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="proxy-httpd" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016324 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="proxy-httpd" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016339 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-notification-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016347 4760 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-notification-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016365 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="sg-core" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016373 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="sg-core" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.016391 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016397 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016580 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon-log" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016594 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="sg-core" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016603 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-central-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016611 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="proxy-httpd" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016618 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb3feba-deb6-4186-8cc2-fcfbdedd6154" containerName="horizon" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.016627 4760 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" containerName="ceilometer-notification-agent" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.018353 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.020637 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.020891 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.021322 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.028700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.083883 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.083936 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-log-httpd\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-scripts\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6pmj\" (UniqueName: \"kubernetes.io/projected/f870c7cc-bcbe-4101-9a86-8a190e20cef2-kube-api-access-z6pmj\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-run-httpd\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.084681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-config-data\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186151 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6pmj\" (UniqueName: \"kubernetes.io/projected/f870c7cc-bcbe-4101-9a86-8a190e20cef2-kube-api-access-z6pmj\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186199 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186265 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-run-httpd\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-config-data\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-scripts\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.186928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-run-httpd\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.187456 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f870c7cc-bcbe-4101-9a86-8a190e20cef2-log-httpd\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.190779 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-config-data\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.196572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.202184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-scripts\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.202556 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.209440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f870c7cc-bcbe-4101-9a86-8a190e20cef2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.211991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6pmj\" (UniqueName: \"kubernetes.io/projected/f870c7cc-bcbe-4101-9a86-8a190e20cef2-kube-api-access-z6pmj\") pod \"ceilometer-0\" (UID: \"f870c7cc-bcbe-4101-9a86-8a190e20cef2\") " pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.350898 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.594083 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.594933 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:55:34 crc kubenswrapper[4760]: E0123 18:55:34.595170 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698380 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698486 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swxgs\" (UniqueName: \"kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.698940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom\") pod \"5be053f1-728e-4cfa-bb8f-0994be12d30b\" (UID: \"5be053f1-728e-4cfa-bb8f-0994be12d30b\") " Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.701098 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5be053f1-728e-4cfa-bb8f-0994be12d30b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.704363 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts" (OuterVolumeSpecName: "scripts") pod 
"5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.705393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs" (OuterVolumeSpecName: "kube-api-access-swxgs") pod "5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "kube-api-access-swxgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.706772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.772499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.803376 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.803686 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swxgs\" (UniqueName: \"kubernetes.io/projected/5be053f1-728e-4cfa-bb8f-0994be12d30b-kube-api-access-swxgs\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.803765 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.803864 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.818806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data" (OuterVolumeSpecName: "config-data") pod "5be053f1-728e-4cfa-bb8f-0994be12d30b" (UID: "5be053f1-728e-4cfa-bb8f-0994be12d30b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.906421 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be053f1-728e-4cfa-bb8f-0994be12d30b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.929249 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.964239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f870c7cc-bcbe-4101-9a86-8a190e20cef2","Type":"ContainerStarted","Data":"2be403ca909b83ed0df3120eac986b812ad47b7c3aa602d915cc6c5b05ff0fd0"} Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.968280 4760 generic.go:334] "Generic (PLEG): container finished" podID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerID="9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125" exitCode=0 Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.968360 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerDied","Data":"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125"} Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.968388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5be053f1-728e-4cfa-bb8f-0994be12d30b","Type":"ContainerDied","Data":"d84af4d38258c1bc2a1a309a302d5447eb7127b3a039c15ca5c016b1801d5e06"} Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.968462 4760 scope.go:117] "RemoveContainer" containerID="d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.968759 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:34 crc kubenswrapper[4760]: I0123 18:55:34.998894 4760 scope.go:117] "RemoveContainer" containerID="9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.014597 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.026963 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.035830 4760 scope.go:117] "RemoveContainer" containerID="d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15" Jan 23 18:55:35 crc kubenswrapper[4760]: E0123 18:55:35.036454 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15\": container with ID starting with d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15 not found: ID does not exist" containerID="d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.036514 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15"} err="failed to get container status \"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15\": rpc error: code = NotFound desc = could not find container \"d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15\": container with ID starting with d72cab54857f3604f5372b55692b4a0cfa847c316d3b790bea35d7cffc8fcc15 not found: ID does not exist" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.036555 4760 scope.go:117] "RemoveContainer" containerID="9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125" Jan 23 18:55:35 crc 
kubenswrapper[4760]: E0123 18:55:35.039863 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125\": container with ID starting with 9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125 not found: ID does not exist" containerID="9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.039921 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125"} err="failed to get container status \"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125\": rpc error: code = NotFound desc = could not find container \"9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125\": container with ID starting with 9f83805a280d265be5b2c04578183a97e4af79902f7196987105d77321312125 not found: ID does not exist" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.044036 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:35 crc kubenswrapper[4760]: E0123 18:55:35.044816 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="probe" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.044847 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="probe" Jan 23 18:55:35 crc kubenswrapper[4760]: E0123 18:55:35.044865 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="manila-scheduler" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.044878 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="manila-scheduler" Jan 23 18:55:35 crc 
kubenswrapper[4760]: I0123 18:55:35.045138 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="probe" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.045175 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" containerName="manila-scheduler" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.048382 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.051297 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.058913 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869p6\" (UniqueName: \"kubernetes.io/projected/d8d48def-f1d3-47de-9724-65f5d7f0d47a-kube-api-access-869p6\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110189 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d48def-f1d3-47de-9724-65f5d7f0d47a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data-custom\") pod \"manila-scheduler-0\" (UID: 
\"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110443 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.110682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-scripts\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: 
I0123 18:55:35.212465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-scripts\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869p6\" (UniqueName: \"kubernetes.io/projected/d8d48def-f1d3-47de-9724-65f5d7f0d47a-kube-api-access-869p6\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d48def-f1d3-47de-9724-65f5d7f0d47a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.212933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d48def-f1d3-47de-9724-65f5d7f0d47a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.216948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-scripts\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.217460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.217553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.218037 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d48def-f1d3-47de-9724-65f5d7f0d47a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.228126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869p6\" (UniqueName: \"kubernetes.io/projected/d8d48def-f1d3-47de-9724-65f5d7f0d47a-kube-api-access-869p6\") pod \"manila-scheduler-0\" (UID: \"d8d48def-f1d3-47de-9724-65f5d7f0d47a\") " pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.369588 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.610095 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be053f1-728e-4cfa-bb8f-0994be12d30b" path="/var/lib/kubelet/pods/5be053f1-728e-4cfa-bb8f-0994be12d30b/volumes" Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.611453 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6" path="/var/lib/kubelet/pods/a98c5002-98b8-4c1d-8ba6-fbf8b38ec4c6/volumes" Jan 23 18:55:35 crc kubenswrapper[4760]: W0123 18:55:35.810960 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d48def_f1d3_47de_9724_65f5d7f0d47a.slice/crio-b53b2e5877b0e12d95fa2e2704c7f7dde99d9b85a27d02bd15136d72c48392a8 WatchSource:0}: Error finding container b53b2e5877b0e12d95fa2e2704c7f7dde99d9b85a27d02bd15136d72c48392a8: Status 404 returned error can't find the container with id b53b2e5877b0e12d95fa2e2704c7f7dde99d9b85a27d02bd15136d72c48392a8 Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.815810 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.979179 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f870c7cc-bcbe-4101-9a86-8a190e20cef2","Type":"ContainerStarted","Data":"f6a02b73c273bcf61417bd8e325948ad4a4959a880f718a25414d4aecf9f5763"} Jan 23 18:55:35 crc kubenswrapper[4760]: I0123 18:55:35.981688 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d8d48def-f1d3-47de-9724-65f5d7f0d47a","Type":"ContainerStarted","Data":"b53b2e5877b0e12d95fa2e2704c7f7dde99d9b85a27d02bd15136d72c48392a8"} Jan 23 18:55:36 crc kubenswrapper[4760]: I0123 18:55:36.993002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-scheduler-0" event={"ID":"d8d48def-f1d3-47de-9724-65f5d7f0d47a","Type":"ContainerStarted","Data":"15406a88489f33fb8e85b7abed24d77fcce39cbf45a51ee061da0493df5eb57e"} Jan 23 18:55:36 crc kubenswrapper[4760]: I0123 18:55:36.993621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d8d48def-f1d3-47de-9724-65f5d7f0d47a","Type":"ContainerStarted","Data":"48db26edfeb31e6f8c4ec7f88b9d02e85c5166a776a8c2d33766da1aaa8d5a9d"} Jan 23 18:55:36 crc kubenswrapper[4760]: I0123 18:55:36.997669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f870c7cc-bcbe-4101-9a86-8a190e20cef2","Type":"ContainerStarted","Data":"80bc725dfe8502dfa677acd39da43da11ece6d048d15850fcdb31cf284c2391f"} Jan 23 18:55:36 crc kubenswrapper[4760]: I0123 18:55:36.997713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f870c7cc-bcbe-4101-9a86-8a190e20cef2","Type":"ContainerStarted","Data":"4f720269f1d76b532276cd257e932864692e04383703d645e00214b4aecbaeef"} Jan 23 18:55:37 crc kubenswrapper[4760]: I0123 18:55:37.018384 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.018366628 podStartE2EDuration="2.018366628s" podCreationTimestamp="2026-01-23 18:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:55:37.014539333 +0000 UTC m=+3280.016997266" watchObservedRunningTime="2026-01-23 18:55:37.018366628 +0000 UTC m=+3280.020824581" Jan 23 18:55:38 crc kubenswrapper[4760]: I0123 18:55:38.738909 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 23 18:55:39 crc kubenswrapper[4760]: I0123 18:55:39.482120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:39 crc kubenswrapper[4760]: I0123 18:55:39.550255 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.029942 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f870c7cc-bcbe-4101-9a86-8a190e20cef2","Type":"ContainerStarted","Data":"bfb037e3716d669b28b15da51174cc25c5824533515f8157367e8865d3a7a8dc"} Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.030023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.030782 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgdnh" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="registry-server" containerID="cri-o://4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585" gracePeriod=2 Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.050071 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.099362326 podStartE2EDuration="7.050050516s" podCreationTimestamp="2026-01-23 18:55:33 +0000 UTC" firstStartedPulling="2026-01-23 18:55:34.933857736 +0000 UTC m=+3277.936315669" lastFinishedPulling="2026-01-23 18:55:38.884545926 +0000 UTC m=+3281.887003859" observedRunningTime="2026-01-23 18:55:40.048504084 +0000 UTC m=+3283.050962017" watchObservedRunningTime="2026-01-23 18:55:40.050050516 +0000 UTC m=+3283.052508459" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.522748 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.635055 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv9f5\" (UniqueName: \"kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5\") pod \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.635506 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities\") pod \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.635627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content\") pod \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\" (UID: \"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b\") " Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.636595 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities" (OuterVolumeSpecName: "utilities") pod "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" (UID: "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.642445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5" (OuterVolumeSpecName: "kube-api-access-rv9f5") pod "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" (UID: "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b"). InnerVolumeSpecName "kube-api-access-rv9f5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.655839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" (UID: "e00e9dbf-07d7-4f11-b1df-8b77457a3d5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.738324 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv9f5\" (UniqueName: \"kubernetes.io/projected/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-kube-api-access-rv9f5\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.738377 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:40 crc kubenswrapper[4760]: I0123 18:55:40.738389 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.047267 4760 generic.go:334] "Generic (PLEG): container finished" podID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerID="4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585" exitCode=0 Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.047320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerDied","Data":"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585"} Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.047334 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgdnh" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.047353 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgdnh" event={"ID":"e00e9dbf-07d7-4f11-b1df-8b77457a3d5b","Type":"ContainerDied","Data":"54df908dda044ffdc9a8f9425cede2dc42677d59a045d7eb5e490f24f4e25e37"} Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.047369 4760 scope.go:117] "RemoveContainer" containerID="4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.086500 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.093455 4760 scope.go:117] "RemoveContainer" containerID="c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.106738 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgdnh"] Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.125276 4760 scope.go:117] "RemoveContainer" containerID="98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.170603 4760 scope.go:117] "RemoveContainer" containerID="4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585" Jan 23 18:55:41 crc kubenswrapper[4760]: E0123 18:55:41.175946 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585\": container with ID starting with 4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585 not found: ID does not exist" containerID="4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.176042 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585"} err="failed to get container status \"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585\": rpc error: code = NotFound desc = could not find container \"4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585\": container with ID starting with 4dc0bc2a50451d69b9d855b1e7371b083571fc83440bb5dc1096c50bbc21b585 not found: ID does not exist" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.176103 4760 scope.go:117] "RemoveContainer" containerID="c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d" Jan 23 18:55:41 crc kubenswrapper[4760]: E0123 18:55:41.176593 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d\": container with ID starting with c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d not found: ID does not exist" containerID="c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.176645 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d"} err="failed to get container status \"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d\": rpc error: code = NotFound desc = could not find container \"c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d\": container with ID starting with c16d57c331c74f7636b56af573c4bba885f04b32d5b77cdb08c9fcb8866c044d not found: ID does not exist" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.176671 4760 scope.go:117] "RemoveContainer" containerID="98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8" Jan 23 18:55:41 crc kubenswrapper[4760]: E0123 
18:55:41.177013 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8\": container with ID starting with 98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8 not found: ID does not exist" containerID="98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.177058 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8"} err="failed to get container status \"98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8\": rpc error: code = NotFound desc = could not find container \"98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8\": container with ID starting with 98f5a04896cc6c0856ddee5c8b738fc11d4eae5e3e7a09232931dbbfd5bbe3d8 not found: ID does not exist" Jan 23 18:55:41 crc kubenswrapper[4760]: I0123 18:55:41.610574 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" path="/var/lib/kubelet/pods/e00e9dbf-07d7-4f11-b1df-8b77457a3d5b/volumes" Jan 23 18:55:42 crc kubenswrapper[4760]: I0123 18:55:42.794862 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 23 18:55:42 crc kubenswrapper[4760]: I0123 18:55:42.860965 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:43 crc kubenswrapper[4760]: I0123 18:55:43.064124 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="probe" containerID="cri-o://868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" gracePeriod=30 Jan 23 18:55:43 crc kubenswrapper[4760]: I0123 
18:55:43.064607 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="manila-share" containerID="cri-o://5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" gracePeriod=30 Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.036819 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082461 4760 generic.go:334] "Generic (PLEG): container finished" podID="4a695858-441e-4d03-9a5a-2c336605cab7" containerID="868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" exitCode=0 Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082797 4760 generic.go:334] "Generic (PLEG): container finished" podID="4a695858-441e-4d03-9a5a-2c336605cab7" containerID="5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" exitCode=1 Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082609 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082557 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerDied","Data":"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2"} Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082965 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerDied","Data":"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea"} Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.082998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4a695858-441e-4d03-9a5a-2c336605cab7","Type":"ContainerDied","Data":"7684db9a2447a3e472192785f6787862efda9aba73eac668187becc4cc20876e"} Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.083029 4760 scope.go:117] "RemoveContainer" containerID="868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113437 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113562 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113646 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113748 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" 
(UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.113846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8k6\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6\") pod \"4a695858-441e-4d03-9a5a-2c336605cab7\" (UID: \"4a695858-441e-4d03-9a5a-2c336605cab7\") " Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.114262 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.115378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.118239 4760 scope.go:117] "RemoveContainer" containerID="5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.119345 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph" (OuterVolumeSpecName: "ceph") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.123489 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts" (OuterVolumeSpecName: "scripts") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.126078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.148936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6" (OuterVolumeSpecName: "kube-api-access-kw8k6") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "kube-api-access-kw8k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.164226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: "4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.191761 4760 scope.go:117] "RemoveContainer" containerID="868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.192122 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2\": container with ID starting with 868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2 not found: ID does not exist" containerID="868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192155 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2"} err="failed to get container status \"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2\": rpc error: code = NotFound desc = could not find container \"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2\": container with ID starting with 868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2 not found: ID does not exist" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192173 4760 scope.go:117] "RemoveContainer" containerID="5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.192517 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea\": container with ID starting with 5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea not found: ID does not exist" containerID="5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192541 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea"} err="failed to get container status \"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea\": rpc error: code = NotFound desc = could not find container \"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea\": container with ID starting with 5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea not found: ID does not exist" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192558 4760 scope.go:117] "RemoveContainer" containerID="868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192780 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2"} err="failed to get container status \"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2\": rpc error: code = NotFound desc = could not find container \"868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2\": container with ID starting with 868179c3ce84c7f7cb785b2ad931902ee58eb54d568aa229827089a8d712c7f2 not found: ID does not exist" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.192809 4760 scope.go:117] "RemoveContainer" containerID="5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.193016 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea"} err="failed to get container status \"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea\": rpc error: code = NotFound desc = could not find container \"5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea\": container with ID starting with 
5b5030571b4c1758132463f246faeef68ece1fb02cc0283a575320cf5c0cafea not found: ID does not exist" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215809 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4a695858-441e-4d03-9a5a-2c336605cab7-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215849 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215864 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215875 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8k6\" (UniqueName: \"kubernetes.io/projected/4a695858-441e-4d03-9a5a-2c336605cab7-kube-api-access-kw8k6\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215891 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.215902 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.237503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data" (OuterVolumeSpecName: "config-data") pod "4a695858-441e-4d03-9a5a-2c336605cab7" (UID: 
"4a695858-441e-4d03-9a5a-2c336605cab7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.317929 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a695858-441e-4d03-9a5a-2c336605cab7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.417487 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.431767 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444324 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.444775 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="extract-utilities" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444798 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="extract-utilities" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.444813 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="extract-content" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444820 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="extract-content" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.444859 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="registry-server" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444869 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="registry-server" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.444884 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="probe" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444893 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="probe" Jan 23 18:55:44 crc kubenswrapper[4760]: E0123 18:55:44.444910 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="manila-share" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.444917 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="manila-share" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.445127 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="manila-share" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.445162 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" containerName="probe" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.445181 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00e9dbf-07d7-4f11-b1df-8b77457a3d5b" containerName="registry-server" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.446428 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.449329 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.462173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-scripts\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522456 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-ceph\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkvg\" (UniqueName: \"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-kube-api-access-prkvg\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 
18:55:44.522758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522859 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.522969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.625724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prkvg\" (UniqueName: \"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-kube-api-access-prkvg\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.625843 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.625883 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.625913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.625968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.626051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-scripts\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.626096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-ceph\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.626137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.626301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.627027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.633251 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-scripts\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.633330 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " 
pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.633345 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.633947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-ceph\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.634759 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-config-data\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.643898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prkvg\" (UniqueName: \"kubernetes.io/projected/412f9ad2-6b73-4af0-bd6e-66a697eb20ba-kube-api-access-prkvg\") pod \"manila-share-share1-0\" (UID: \"412f9ad2-6b73-4af0-bd6e-66a697eb20ba\") " pod="openstack/manila-share-share1-0" Jan 23 18:55:44 crc kubenswrapper[4760]: I0123 18:55:44.768559 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 18:55:45 crc kubenswrapper[4760]: I0123 18:55:45.358718 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 18:55:45 crc kubenswrapper[4760]: I0123 18:55:45.370256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 23 18:55:45 crc kubenswrapper[4760]: I0123 18:55:45.608736 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a695858-441e-4d03-9a5a-2c336605cab7" path="/var/lib/kubelet/pods/4a695858-441e-4d03-9a5a-2c336605cab7/volumes" Jan 23 18:55:46 crc kubenswrapper[4760]: I0123 18:55:46.103596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"412f9ad2-6b73-4af0-bd6e-66a697eb20ba","Type":"ContainerStarted","Data":"511f3c5e2502d5939e52562726bbe6b9d7cfb48c99835c40980cd6c9ef4e950e"} Jan 23 18:55:46 crc kubenswrapper[4760]: I0123 18:55:46.104118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"412f9ad2-6b73-4af0-bd6e-66a697eb20ba","Type":"ContainerStarted","Data":"4dbd018d7e0d6fa47343454e30de6c12643f9009dc9387d318336906c5e7dddf"} Jan 23 18:55:47 crc kubenswrapper[4760]: I0123 18:55:47.114549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"412f9ad2-6b73-4af0-bd6e-66a697eb20ba","Type":"ContainerStarted","Data":"8f955601b19f5bf6db7f867459c4dca20405754c9b67bcfb52101da8c2d8b9cc"} Jan 23 18:55:47 crc kubenswrapper[4760]: I0123 18:55:47.136032 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.136008118 podStartE2EDuration="3.136008118s" podCreationTimestamp="2026-01-23 18:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
18:55:47.13209914 +0000 UTC m=+3290.134557083" watchObservedRunningTime="2026-01-23 18:55:47.136008118 +0000 UTC m=+3290.138466051" Jan 23 18:55:47 crc kubenswrapper[4760]: I0123 18:55:47.603289 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:55:47 crc kubenswrapper[4760]: E0123 18:55:47.603921 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.810561 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.812665 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.840827 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.899361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.899724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6ps\" (UniqueName: \"kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:52 crc kubenswrapper[4760]: I0123 18:55:52.899763 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.002604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.002672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8w6ps\" (UniqueName: \"kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.002705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.003478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.003727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.023148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6ps\" (UniqueName: \"kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps\") pod \"redhat-operators-hxql5\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.134601 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:55:53 crc kubenswrapper[4760]: I0123 18:55:53.589094 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:55:54 crc kubenswrapper[4760]: I0123 18:55:54.179564 4760 generic.go:334] "Generic (PLEG): container finished" podID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerID="62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a" exitCode=0 Jan 23 18:55:54 crc kubenswrapper[4760]: I0123 18:55:54.179645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerDied","Data":"62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a"} Jan 23 18:55:54 crc kubenswrapper[4760]: I0123 18:55:54.179997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerStarted","Data":"5ee364da3c36f9ae6ac8a74f17db610bf36a4d86df8c90e442be34ad7f992b98"} Jan 23 18:55:54 crc kubenswrapper[4760]: I0123 18:55:54.770442 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 23 18:55:55 crc kubenswrapper[4760]: I0123 18:55:55.193255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerStarted","Data":"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536"} Jan 23 18:55:56 crc kubenswrapper[4760]: I0123 18:55:56.979078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 23 18:55:57 crc kubenswrapper[4760]: I0123 18:55:57.211737 4760 generic.go:334] "Generic (PLEG): container finished" podID="20507db4-b63b-4af1-bb61-4ca56f36ce43" 
containerID="55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536" exitCode=0 Jan 23 18:55:57 crc kubenswrapper[4760]: I0123 18:55:57.211786 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerDied","Data":"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536"} Jan 23 18:56:00 crc kubenswrapper[4760]: I0123 18:56:00.244249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerStarted","Data":"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84"} Jan 23 18:56:00 crc kubenswrapper[4760]: I0123 18:56:00.263520 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxql5" podStartSLOduration=3.130243205 podStartE2EDuration="8.263503155s" podCreationTimestamp="2026-01-23 18:55:52 +0000 UTC" firstStartedPulling="2026-01-23 18:55:54.181201497 +0000 UTC m=+3297.183659430" lastFinishedPulling="2026-01-23 18:55:59.314461447 +0000 UTC m=+3302.316919380" observedRunningTime="2026-01-23 18:56:00.262308062 +0000 UTC m=+3303.264766035" watchObservedRunningTime="2026-01-23 18:56:00.263503155 +0000 UTC m=+3303.265961088" Jan 23 18:56:00 crc kubenswrapper[4760]: I0123 18:56:00.595918 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:56:00 crc kubenswrapper[4760]: E0123 18:56:00.596207 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:56:03 crc kubenswrapper[4760]: I0123 18:56:03.135535 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:03 crc kubenswrapper[4760]: I0123 18:56:03.136038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:04 crc kubenswrapper[4760]: I0123 18:56:04.206011 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxql5" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="registry-server" probeResult="failure" output=< Jan 23 18:56:04 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 23 18:56:04 crc kubenswrapper[4760]: > Jan 23 18:56:04 crc kubenswrapper[4760]: I0123 18:56:04.359967 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:56:06 crc kubenswrapper[4760]: I0123 18:56:06.423035 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 23 18:56:13 crc kubenswrapper[4760]: I0123 18:56:13.182435 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:13 crc kubenswrapper[4760]: I0123 18:56:13.241117 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:13 crc kubenswrapper[4760]: I0123 18:56:13.431653 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:56:13 crc kubenswrapper[4760]: I0123 18:56:13.595222 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:56:13 crc kubenswrapper[4760]: E0123 18:56:13.595496 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.380482 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxql5" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="registry-server" containerID="cri-o://070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84" gracePeriod=2 Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.867446 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.983249 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content\") pod \"20507db4-b63b-4af1-bb61-4ca56f36ce43\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.983371 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6ps\" (UniqueName: \"kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps\") pod \"20507db4-b63b-4af1-bb61-4ca56f36ce43\" (UID: \"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.983628 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities\") pod \"20507db4-b63b-4af1-bb61-4ca56f36ce43\" (UID: 
\"20507db4-b63b-4af1-bb61-4ca56f36ce43\") " Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.984713 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities" (OuterVolumeSpecName: "utilities") pod "20507db4-b63b-4af1-bb61-4ca56f36ce43" (UID: "20507db4-b63b-4af1-bb61-4ca56f36ce43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:56:14 crc kubenswrapper[4760]: I0123 18:56:14.992333 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps" (OuterVolumeSpecName: "kube-api-access-8w6ps") pod "20507db4-b63b-4af1-bb61-4ca56f36ce43" (UID: "20507db4-b63b-4af1-bb61-4ca56f36ce43"). InnerVolumeSpecName "kube-api-access-8w6ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.085790 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.086269 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6ps\" (UniqueName: \"kubernetes.io/projected/20507db4-b63b-4af1-bb61-4ca56f36ce43-kube-api-access-8w6ps\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.101850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20507db4-b63b-4af1-bb61-4ca56f36ce43" (UID: "20507db4-b63b-4af1-bb61-4ca56f36ce43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.187756 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20507db4-b63b-4af1-bb61-4ca56f36ce43-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.393668 4760 generic.go:334] "Generic (PLEG): container finished" podID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerID="070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84" exitCode=0 Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.393722 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerDied","Data":"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84"} Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.393783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxql5" event={"ID":"20507db4-b63b-4af1-bb61-4ca56f36ce43","Type":"ContainerDied","Data":"5ee364da3c36f9ae6ac8a74f17db610bf36a4d86df8c90e442be34ad7f992b98"} Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.393778 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxql5" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.393800 4760 scope.go:117] "RemoveContainer" containerID="070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.423723 4760 scope.go:117] "RemoveContainer" containerID="55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.439815 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.451278 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxql5"] Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.456509 4760 scope.go:117] "RemoveContainer" containerID="62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.498608 4760 scope.go:117] "RemoveContainer" containerID="070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84" Jan 23 18:56:15 crc kubenswrapper[4760]: E0123 18:56:15.499120 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84\": container with ID starting with 070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84 not found: ID does not exist" containerID="070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.499161 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84"} err="failed to get container status \"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84\": rpc error: code = NotFound desc = could not find container 
\"070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84\": container with ID starting with 070a6e1948145892b71dd34d751a4ccfe0f874a9d2dbd8ad7c86adb970047a84 not found: ID does not exist" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.499185 4760 scope.go:117] "RemoveContainer" containerID="55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536" Jan 23 18:56:15 crc kubenswrapper[4760]: E0123 18:56:15.499456 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536\": container with ID starting with 55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536 not found: ID does not exist" containerID="55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.499485 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536"} err="failed to get container status \"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536\": rpc error: code = NotFound desc = could not find container \"55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536\": container with ID starting with 55e81f49db877e4e055514d16cb4d37db89b0dc6ac071559fcd640085e215536 not found: ID does not exist" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.499503 4760 scope.go:117] "RemoveContainer" containerID="62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a" Jan 23 18:56:15 crc kubenswrapper[4760]: E0123 18:56:15.499750 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a\": container with ID starting with 62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a not found: ID does not exist" 
containerID="62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.499775 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a"} err="failed to get container status \"62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a\": rpc error: code = NotFound desc = could not find container \"62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a\": container with ID starting with 62489d9c045b3137edcf5d33d1b7f8bb59563a500f04c87ed56e7badccbdae5a not found: ID does not exist" Jan 23 18:56:15 crc kubenswrapper[4760]: I0123 18:56:15.607040 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" path="/var/lib/kubelet/pods/20507db4-b63b-4af1-bb61-4ca56f36ce43/volumes" Jan 23 18:56:28 crc kubenswrapper[4760]: I0123 18:56:28.596601 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:56:29 crc kubenswrapper[4760]: I0123 18:56:29.510243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c"} Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.931701 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 18:56:58 crc kubenswrapper[4760]: E0123 18:56:58.932647 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="registry-server" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.932667 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="registry-server" Jan 23 
18:56:58 crc kubenswrapper[4760]: E0123 18:56:58.932689 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="extract-content" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.932696 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="extract-content" Jan 23 18:56:58 crc kubenswrapper[4760]: E0123 18:56:58.932717 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="extract-utilities" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.932723 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="extract-utilities" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.932906 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="20507db4-b63b-4af1-bb61-4ca56f36ce43" containerName="registry-server" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.933660 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.936072 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.936257 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xkghz" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.936593 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.937099 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 18:56:58 crc kubenswrapper[4760]: I0123 18:56:58.942615 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106553 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmftz\" (UniqueName: \"kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106784 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.106930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.107066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmftz\" (UniqueName: \"kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209622 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209746 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209914 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.209984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " 
pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.210496 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.211098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.211652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.213007 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.214971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc 
kubenswrapper[4760]: I0123 18:56:59.216426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.227836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.228751 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.229324 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmftz\" (UniqueName: \"kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.260020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") " pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.300096 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 18:56:59 crc kubenswrapper[4760]: I0123 18:56:59.775773 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 18:56:59 crc kubenswrapper[4760]: W0123 18:56:59.780580 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b58d560_8084_471f_a385_c36ce2d28bd8.slice/crio-e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446 WatchSource:0}: Error finding container e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446: Status 404 returned error can't find the container with id e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446 Jan 23 18:57:00 crc kubenswrapper[4760]: I0123 18:57:00.777290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b58d560-8084-471f-a385-c36ce2d28bd8","Type":"ContainerStarted","Data":"e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446"} Jan 23 18:57:30 crc kubenswrapper[4760]: E0123 18:57:30.224384 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 23 18:57:30 crc kubenswrapper[4760]: E0123 18:57:30.225085 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6b58d560-8084-471f-a385-c36ce2d28bd8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:57:30 crc kubenswrapper[4760]: E0123 18:57:30.226944 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6b58d560-8084-471f-a385-c36ce2d28bd8" Jan 23 18:57:30 crc kubenswrapper[4760]: E0123 18:57:30.374802 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6b58d560-8084-471f-a385-c36ce2d28bd8" Jan 23 18:57:44 crc 
kubenswrapper[4760]: I0123 18:57:44.507238 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b58d560-8084-471f-a385-c36ce2d28bd8","Type":"ContainerStarted","Data":"71dc049cb52a15c923f9de9e5014f4db0ebec00e41ea073d44b4dca135b3cf55"} Jan 23 18:57:44 crc kubenswrapper[4760]: I0123 18:57:44.533906 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.261644457 podStartE2EDuration="47.533878971s" podCreationTimestamp="2026-01-23 18:56:57 +0000 UTC" firstStartedPulling="2026-01-23 18:56:59.783208067 +0000 UTC m=+3362.785665990" lastFinishedPulling="2026-01-23 18:57:43.055442571 +0000 UTC m=+3406.057900504" observedRunningTime="2026-01-23 18:57:44.522312752 +0000 UTC m=+3407.524770705" watchObservedRunningTime="2026-01-23 18:57:44.533878971 +0000 UTC m=+3407.536336904" Jan 23 18:58:46 crc kubenswrapper[4760]: I0123 18:58:46.076179 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:58:46 crc kubenswrapper[4760]: I0123 18:58:46.077073 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:59:16 crc kubenswrapper[4760]: I0123 18:59:16.076166 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 18:59:16 crc kubenswrapper[4760]: I0123 18:59:16.076692 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.076891 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.077543 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.077601 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.078581 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.078640 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c" gracePeriod=600 Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.694804 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c" exitCode=0 Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.694991 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c"} Jan 23 18:59:46 crc kubenswrapper[4760]: I0123 18:59:46.695078 4760 scope.go:117] "RemoveContainer" containerID="9f05deadd3a4305f44b966d88c69ec3e6f2728f1e29386fc97e34ac9e5f112af" Jan 23 18:59:47 crc kubenswrapper[4760]: I0123 18:59:47.708462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"} Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.166110 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-75922"] Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.168218 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.172013 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.172386 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.207482 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-75922"] Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.352060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.352104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.352143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbv2b\" (UniqueName: \"kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.453761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.453829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.453888 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbv2b\" (UniqueName: \"kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.454810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.460219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.473101 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbv2b\" (UniqueName: \"kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b\") pod \"collect-profiles-29486580-75922\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.505435 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:00 crc kubenswrapper[4760]: I0123 19:00:00.994652 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-75922"] Jan 23 19:00:01 crc kubenswrapper[4760]: I0123 19:00:01.849415 4760 generic.go:334] "Generic (PLEG): container finished" podID="02c68d85-ed01-4e78-be3b-9120fb4da920" containerID="8885393bf42009ce54e1dd2c7e5d3b770f5aa31a795b1892d97b1d899da5224e" exitCode=0 Jan 23 19:00:01 crc kubenswrapper[4760]: I0123 19:00:01.849530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" event={"ID":"02c68d85-ed01-4e78-be3b-9120fb4da920","Type":"ContainerDied","Data":"8885393bf42009ce54e1dd2c7e5d3b770f5aa31a795b1892d97b1d899da5224e"} Jan 23 19:00:01 crc kubenswrapper[4760]: I0123 19:00:01.849877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" 
event={"ID":"02c68d85-ed01-4e78-be3b-9120fb4da920","Type":"ContainerStarted","Data":"8330f65d25fdf07094437dd2d39a30e54e08856bf5292dbedf69a3110df1031d"} Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.217092 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.312797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbv2b\" (UniqueName: \"kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b\") pod \"02c68d85-ed01-4e78-be3b-9120fb4da920\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.312889 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume\") pod \"02c68d85-ed01-4e78-be3b-9120fb4da920\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.312979 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume\") pod \"02c68d85-ed01-4e78-be3b-9120fb4da920\" (UID: \"02c68d85-ed01-4e78-be3b-9120fb4da920\") " Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.313959 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume" (OuterVolumeSpecName: "config-volume") pod "02c68d85-ed01-4e78-be3b-9120fb4da920" (UID: "02c68d85-ed01-4e78-be3b-9120fb4da920"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.319725 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b" (OuterVolumeSpecName: "kube-api-access-hbv2b") pod "02c68d85-ed01-4e78-be3b-9120fb4da920" (UID: "02c68d85-ed01-4e78-be3b-9120fb4da920"). InnerVolumeSpecName "kube-api-access-hbv2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.319781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02c68d85-ed01-4e78-be3b-9120fb4da920" (UID: "02c68d85-ed01-4e78-be3b-9120fb4da920"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.415568 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02c68d85-ed01-4e78-be3b-9120fb4da920-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.415598 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02c68d85-ed01-4e78-be3b-9120fb4da920-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.415609 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbv2b\" (UniqueName: \"kubernetes.io/projected/02c68d85-ed01-4e78-be3b-9120fb4da920-kube-api-access-hbv2b\") on node \"crc\" DevicePath \"\"" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.866260 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" 
event={"ID":"02c68d85-ed01-4e78-be3b-9120fb4da920","Type":"ContainerDied","Data":"8330f65d25fdf07094437dd2d39a30e54e08856bf5292dbedf69a3110df1031d"} Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.866528 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8330f65d25fdf07094437dd2d39a30e54e08856bf5292dbedf69a3110df1031d" Jan 23 19:00:03 crc kubenswrapper[4760]: I0123 19:00:03.866632 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-75922" Jan 23 19:00:04 crc kubenswrapper[4760]: I0123 19:00:04.304253 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd"] Jan 23 19:00:04 crc kubenswrapper[4760]: I0123 19:00:04.312612 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-gkjcd"] Jan 23 19:00:05 crc kubenswrapper[4760]: I0123 19:00:05.609074 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be58c19-65a9-47f0-b186-26ebc2e47ae7" path="/var/lib/kubelet/pods/0be58c19-65a9-47f0-b186-26ebc2e47ae7/volumes" Jan 23 19:00:07 crc kubenswrapper[4760]: I0123 19:00:07.709541 4760 scope.go:117] "RemoveContainer" containerID="55ecdea62c38ccbd165b87678715f072b312af424dff675cc873672dea7d5ad4" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.116252 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:00:53 crc kubenswrapper[4760]: E0123 19:00:53.117329 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c68d85-ed01-4e78-be3b-9120fb4da920" containerName="collect-profiles" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.117340 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c68d85-ed01-4e78-be3b-9120fb4da920" containerName="collect-profiles" Jan 23 19:00:53 crc 
kubenswrapper[4760]: I0123 19:00:53.117516 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c68d85-ed01-4e78-be3b-9120fb4da920" containerName="collect-profiles" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.118815 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.134015 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.172944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjb4\" (UniqueName: \"kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.173059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.173112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.274871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grjb4\" (UniqueName: 
\"kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.274919 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.274958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.275625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.275661 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.294401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjb4\" (UniqueName: 
\"kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4\") pod \"certified-operators-b5s6q\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:53 crc kubenswrapper[4760]: I0123 19:00:53.462124 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:00:54 crc kubenswrapper[4760]: I0123 19:00:54.050886 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:00:54 crc kubenswrapper[4760]: I0123 19:00:54.378294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerStarted","Data":"60ef42c5ed0be369079a3baf58c3256a8b003ac7c96e99439654080e14b7d1dd"} Jan 23 19:00:57 crc kubenswrapper[4760]: I0123 19:00:57.420477 4760 generic.go:334] "Generic (PLEG): container finished" podID="57995f48-ee7b-427a-b816-e91c365c39fd" containerID="e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519" exitCode=0 Jan 23 19:00:57 crc kubenswrapper[4760]: I0123 19:00:57.420622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerDied","Data":"e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519"} Jan 23 19:00:57 crc kubenswrapper[4760]: I0123 19:00:57.423571 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.172785 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486581-qdnbh"] Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.174546 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.191596 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486581-qdnbh"] Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.315258 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.315686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.315723 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.315810 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd2kb\" (UniqueName: \"kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.417176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-gd2kb\" (UniqueName: \"kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.417259 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.417319 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.417348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.425433 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.426374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.427157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.434292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd2kb\" (UniqueName: \"kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb\") pod \"keystone-cron-29486581-qdnbh\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.448262 4760 generic.go:334] "Generic (PLEG): container finished" podID="57995f48-ee7b-427a-b816-e91c365c39fd" containerID="9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000" exitCode=0 Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.448308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerDied","Data":"9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000"} Jan 23 19:01:00 crc kubenswrapper[4760]: I0123 19:01:00.490574 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:01 crc kubenswrapper[4760]: I0123 19:01:01.029172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486581-qdnbh"] Jan 23 19:01:01 crc kubenswrapper[4760]: I0123 19:01:01.457796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-qdnbh" event={"ID":"f163dd36-7bd7-4821-9874-8eb534d2c03d","Type":"ContainerStarted","Data":"899bb55e0c3f6533fca4a1dcdf1fd9eb9637c2922eaf0fff46093c939402f427"} Jan 23 19:01:01 crc kubenswrapper[4760]: I0123 19:01:01.457842 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-qdnbh" event={"ID":"f163dd36-7bd7-4821-9874-8eb534d2c03d","Type":"ContainerStarted","Data":"3ec8725832d9c70b06e0fdccf3946d8c89bd68cdc8169a619dfbb094c3cf1385"} Jan 23 19:01:01 crc kubenswrapper[4760]: I0123 19:01:01.474097 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486581-qdnbh" podStartSLOduration=1.474074127 podStartE2EDuration="1.474074127s" podCreationTimestamp="2026-01-23 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:01:01.472331829 +0000 UTC m=+3604.474789762" watchObservedRunningTime="2026-01-23 19:01:01.474074127 +0000 UTC m=+3604.476532080" Jan 23 19:01:03 crc kubenswrapper[4760]: I0123 19:01:03.477444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerStarted","Data":"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9"} Jan 23 19:01:03 crc kubenswrapper[4760]: I0123 19:01:03.513795 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5s6q" podStartSLOduration=6.395674268 
podStartE2EDuration="10.513778067s" podCreationTimestamp="2026-01-23 19:00:53 +0000 UTC" firstStartedPulling="2026-01-23 19:00:57.423277285 +0000 UTC m=+3600.425735218" lastFinishedPulling="2026-01-23 19:01:01.541381084 +0000 UTC m=+3604.543839017" observedRunningTime="2026-01-23 19:01:03.495572845 +0000 UTC m=+3606.498030788" watchObservedRunningTime="2026-01-23 19:01:03.513778067 +0000 UTC m=+3606.516236000" Jan 23 19:01:05 crc kubenswrapper[4760]: I0123 19:01:05.495025 4760 generic.go:334] "Generic (PLEG): container finished" podID="f163dd36-7bd7-4821-9874-8eb534d2c03d" containerID="899bb55e0c3f6533fca4a1dcdf1fd9eb9637c2922eaf0fff46093c939402f427" exitCode=0 Jan 23 19:01:05 crc kubenswrapper[4760]: I0123 19:01:05.495102 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-qdnbh" event={"ID":"f163dd36-7bd7-4821-9874-8eb534d2c03d","Type":"ContainerDied","Data":"899bb55e0c3f6533fca4a1dcdf1fd9eb9637c2922eaf0fff46093c939402f427"} Jan 23 19:01:06 crc kubenswrapper[4760]: I0123 19:01:06.962101 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.066065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd2kb\" (UniqueName: \"kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb\") pod \"f163dd36-7bd7-4821-9874-8eb534d2c03d\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.066261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys\") pod \"f163dd36-7bd7-4821-9874-8eb534d2c03d\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.066316 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle\") pod \"f163dd36-7bd7-4821-9874-8eb534d2c03d\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.066386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data\") pod \"f163dd36-7bd7-4821-9874-8eb534d2c03d\" (UID: \"f163dd36-7bd7-4821-9874-8eb534d2c03d\") " Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.075138 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f163dd36-7bd7-4821-9874-8eb534d2c03d" (UID: "f163dd36-7bd7-4821-9874-8eb534d2c03d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.076107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb" (OuterVolumeSpecName: "kube-api-access-gd2kb") pod "f163dd36-7bd7-4821-9874-8eb534d2c03d" (UID: "f163dd36-7bd7-4821-9874-8eb534d2c03d"). InnerVolumeSpecName "kube-api-access-gd2kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.100571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f163dd36-7bd7-4821-9874-8eb534d2c03d" (UID: "f163dd36-7bd7-4821-9874-8eb534d2c03d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.127657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data" (OuterVolumeSpecName: "config-data") pod "f163dd36-7bd7-4821-9874-8eb534d2c03d" (UID: "f163dd36-7bd7-4821-9874-8eb534d2c03d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.168948 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.169006 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd2kb\" (UniqueName: \"kubernetes.io/projected/f163dd36-7bd7-4821-9874-8eb534d2c03d-kube-api-access-gd2kb\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.169022 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.169032 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f163dd36-7bd7-4821-9874-8eb534d2c03d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.524351 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-qdnbh" event={"ID":"f163dd36-7bd7-4821-9874-8eb534d2c03d","Type":"ContainerDied","Data":"3ec8725832d9c70b06e0fdccf3946d8c89bd68cdc8169a619dfbb094c3cf1385"} Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.528859 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec8725832d9c70b06e0fdccf3946d8c89bd68cdc8169a619dfbb094c3cf1385" Jan 23 19:01:07 crc kubenswrapper[4760]: I0123 19:01:07.524813 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-qdnbh" Jan 23 19:01:13 crc kubenswrapper[4760]: I0123 19:01:13.462381 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:13 crc kubenswrapper[4760]: I0123 19:01:13.462915 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:13 crc kubenswrapper[4760]: I0123 19:01:13.510716 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:13 crc kubenswrapper[4760]: I0123 19:01:13.619107 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:13 crc kubenswrapper[4760]: I0123 19:01:13.750539 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:01:15 crc kubenswrapper[4760]: I0123 19:01:15.587058 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5s6q" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="registry-server" containerID="cri-o://2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9" gracePeriod=2 Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.167683 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.245832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content\") pod \"57995f48-ee7b-427a-b816-e91c365c39fd\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.245882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grjb4\" (UniqueName: \"kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4\") pod \"57995f48-ee7b-427a-b816-e91c365c39fd\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.245906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities\") pod \"57995f48-ee7b-427a-b816-e91c365c39fd\" (UID: \"57995f48-ee7b-427a-b816-e91c365c39fd\") " Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.247083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities" (OuterVolumeSpecName: "utilities") pod "57995f48-ee7b-427a-b816-e91c365c39fd" (UID: "57995f48-ee7b-427a-b816-e91c365c39fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.261620 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4" (OuterVolumeSpecName: "kube-api-access-grjb4") pod "57995f48-ee7b-427a-b816-e91c365c39fd" (UID: "57995f48-ee7b-427a-b816-e91c365c39fd"). InnerVolumeSpecName "kube-api-access-grjb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.313834 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57995f48-ee7b-427a-b816-e91c365c39fd" (UID: "57995f48-ee7b-427a-b816-e91c365c39fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.348513 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.348562 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grjb4\" (UniqueName: \"kubernetes.io/projected/57995f48-ee7b-427a-b816-e91c365c39fd-kube-api-access-grjb4\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.348579 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57995f48-ee7b-427a-b816-e91c365c39fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.598439 4760 generic.go:334] "Generic (PLEG): container finished" podID="57995f48-ee7b-427a-b816-e91c365c39fd" containerID="2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9" exitCode=0 Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.598501 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerDied","Data":"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9"} Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.598792 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-b5s6q" event={"ID":"57995f48-ee7b-427a-b816-e91c365c39fd","Type":"ContainerDied","Data":"60ef42c5ed0be369079a3baf58c3256a8b003ac7c96e99439654080e14b7d1dd"} Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.598537 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5s6q" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.598824 4760 scope.go:117] "RemoveContainer" containerID="2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.621147 4760 scope.go:117] "RemoveContainer" containerID="9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.648927 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.664253 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5s6q"] Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.667474 4760 scope.go:117] "RemoveContainer" containerID="e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.709942 4760 scope.go:117] "RemoveContainer" containerID="2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9" Jan 23 19:01:16 crc kubenswrapper[4760]: E0123 19:01:16.713601 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9\": container with ID starting with 2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9 not found: ID does not exist" containerID="2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 
19:01:16.713649 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9"} err="failed to get container status \"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9\": rpc error: code = NotFound desc = could not find container \"2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9\": container with ID starting with 2c82cb757fdd8447098e5ebb7a18201fed74004ff8c1ad856d367c76905913f9 not found: ID does not exist" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.713686 4760 scope.go:117] "RemoveContainer" containerID="9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000" Jan 23 19:01:16 crc kubenswrapper[4760]: E0123 19:01:16.714196 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000\": container with ID starting with 9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000 not found: ID does not exist" containerID="9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.714250 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000"} err="failed to get container status \"9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000\": rpc error: code = NotFound desc = could not find container \"9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000\": container with ID starting with 9d1c830215d83b49f9588aa704d28f8a7a4f57b1b1d399b865a38ee771f8a000 not found: ID does not exist" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.714277 4760 scope.go:117] "RemoveContainer" containerID="e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519" Jan 23 19:01:16 crc 
kubenswrapper[4760]: E0123 19:01:16.714662 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519\": container with ID starting with e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519 not found: ID does not exist" containerID="e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519" Jan 23 19:01:16 crc kubenswrapper[4760]: I0123 19:01:16.714710 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519"} err="failed to get container status \"e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519\": rpc error: code = NotFound desc = could not find container \"e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519\": container with ID starting with e9c4f18b85163e602ef29190b5f5d00dcbb5ab0c4a6863cb041a264d1b0c2519 not found: ID does not exist" Jan 23 19:01:18 crc kubenswrapper[4760]: I0123 19:01:18.320778 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" path="/var/lib/kubelet/pods/57995f48-ee7b-427a-b816-e91c365c39fd/volumes" Jan 23 19:01:46 crc kubenswrapper[4760]: I0123 19:01:46.075251 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:01:46 crc kubenswrapper[4760]: I0123 19:01:46.075919 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 23 19:02:08 crc kubenswrapper[4760]: I0123 19:02:08.218230 4760 scope.go:117] "RemoveContainer" containerID="71375fc65809656086945ec3169a5457f338b8cfcbe0c33e034b13523cd6b32f" Jan 23 19:02:08 crc kubenswrapper[4760]: I0123 19:02:08.242698 4760 scope.go:117] "RemoveContainer" containerID="30dab61dfd4ebcef352ebbe36b6dec347a12dd7698f910db5f8341e4a997d696" Jan 23 19:02:08 crc kubenswrapper[4760]: I0123 19:02:08.264817 4760 scope.go:117] "RemoveContainer" containerID="08971d2d0d6b1d60e28f1bd953dcb6ac4bb3d3df81581d4c4b6c0008758b9488" Jan 23 19:02:08 crc kubenswrapper[4760]: I0123 19:02:08.289037 4760 scope.go:117] "RemoveContainer" containerID="59a203e96e41013107bad19dee77d8e041d1ab7d1544dfc680a5d3d7061dc5a5" Jan 23 19:02:16 crc kubenswrapper[4760]: I0123 19:02:16.075895 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:02:16 crc kubenswrapper[4760]: I0123 19:02:16.076662 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:02:46 crc kubenswrapper[4760]: I0123 19:02:46.075481 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:02:46 crc kubenswrapper[4760]: I0123 19:02:46.076030 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:02:46 crc kubenswrapper[4760]: I0123 19:02:46.076081 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 19:02:46 crc kubenswrapper[4760]: I0123 19:02:46.076959 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:02:46 crc kubenswrapper[4760]: I0123 19:02:46.077018 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" gracePeriod=600 Jan 23 19:02:46 crc kubenswrapper[4760]: E0123 19:02:46.193431 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:02:47 crc kubenswrapper[4760]: I0123 19:02:47.103853 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" 
containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" exitCode=0 Jan 23 19:02:47 crc kubenswrapper[4760]: I0123 19:02:47.103936 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"} Jan 23 19:02:47 crc kubenswrapper[4760]: I0123 19:02:47.104151 4760 scope.go:117] "RemoveContainer" containerID="3c694d57098c16a3b8973f208f48d94b5b5aff3868cb5bb87918942c66a55f5c" Jan 23 19:02:47 crc kubenswrapper[4760]: I0123 19:02:47.104833 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:02:47 crc kubenswrapper[4760]: E0123 19:02:47.105061 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.177442 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:02:58 crc kubenswrapper[4760]: E0123 19:02:58.179299 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="extract-content" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.179378 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="extract-content" Jan 23 19:02:58 crc kubenswrapper[4760]: E0123 19:02:58.179474 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f163dd36-7bd7-4821-9874-8eb534d2c03d" containerName="keystone-cron" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.179534 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f163dd36-7bd7-4821-9874-8eb534d2c03d" containerName="keystone-cron" Jan 23 19:02:58 crc kubenswrapper[4760]: E0123 19:02:58.179592 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="registry-server" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.179651 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="registry-server" Jan 23 19:02:58 crc kubenswrapper[4760]: E0123 19:02:58.179730 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="extract-utilities" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.179811 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="extract-utilities" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.180063 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f163dd36-7bd7-4821-9874-8eb534d2c03d" containerName="keystone-cron" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.180185 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="57995f48-ee7b-427a-b816-e91c365c39fd" containerName="registry-server" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.181792 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.201469 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.341560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.341645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.341757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5p8\" (UniqueName: \"kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.443180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb5p8\" (UniqueName: \"kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.443323 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.443383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.444012 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.444051 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.478489 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb5p8\" (UniqueName: \"kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8\") pod \"community-operators-wkl8g\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:58 crc kubenswrapper[4760]: I0123 19:02:58.506613 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:02:59 crc kubenswrapper[4760]: I0123 19:02:59.134718 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:02:59 crc kubenswrapper[4760]: I0123 19:02:59.208595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerStarted","Data":"e85ec484ef96d046ad75135a74a7d7a0bb22c1ac9488e39f457e4d3cb6e4061e"} Jan 23 19:02:59 crc kubenswrapper[4760]: I0123 19:02:59.595656 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:02:59 crc kubenswrapper[4760]: E0123 19:02:59.596177 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:03:00 crc kubenswrapper[4760]: I0123 19:03:00.220796 4760 generic.go:334] "Generic (PLEG): container finished" podID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerID="634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945" exitCode=0 Jan 23 19:03:00 crc kubenswrapper[4760]: I0123 19:03:00.220850 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerDied","Data":"634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945"} Jan 23 19:03:01 crc kubenswrapper[4760]: I0123 19:03:01.236496 4760 generic.go:334] "Generic (PLEG): container finished" podID="918a2349-f139-4017-bfbd-0c1ba69eb930" 
containerID="b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931" exitCode=0 Jan 23 19:03:01 crc kubenswrapper[4760]: I0123 19:03:01.236612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerDied","Data":"b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931"} Jan 23 19:03:02 crc kubenswrapper[4760]: I0123 19:03:02.249588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerStarted","Data":"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2"} Jan 23 19:03:08 crc kubenswrapper[4760]: I0123 19:03:08.507289 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:08 crc kubenswrapper[4760]: I0123 19:03:08.507846 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:08 crc kubenswrapper[4760]: I0123 19:03:08.556971 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:08 crc kubenswrapper[4760]: I0123 19:03:08.578938 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkl8g" podStartSLOduration=9.056880814 podStartE2EDuration="10.578913796s" podCreationTimestamp="2026-01-23 19:02:58 +0000 UTC" firstStartedPulling="2026-01-23 19:03:00.224156299 +0000 UTC m=+3723.226614222" lastFinishedPulling="2026-01-23 19:03:01.746189271 +0000 UTC m=+3724.748647204" observedRunningTime="2026-01-23 19:03:02.275812529 +0000 UTC m=+3725.278270482" watchObservedRunningTime="2026-01-23 19:03:08.578913796 +0000 UTC m=+3731.581371729" Jan 23 19:03:09 crc kubenswrapper[4760]: I0123 
19:03:09.370050 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:09 crc kubenswrapper[4760]: I0123 19:03:09.423077 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.337947 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkl8g" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="registry-server" containerID="cri-o://e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2" gracePeriod=2 Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.820502 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.912595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities\") pod \"918a2349-f139-4017-bfbd-0c1ba69eb930\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.913020 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb5p8\" (UniqueName: \"kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8\") pod \"918a2349-f139-4017-bfbd-0c1ba69eb930\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.913109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content\") pod \"918a2349-f139-4017-bfbd-0c1ba69eb930\" (UID: \"918a2349-f139-4017-bfbd-0c1ba69eb930\") " Jan 23 19:03:11 crc kubenswrapper[4760]: 
I0123 19:03:11.913531 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities" (OuterVolumeSpecName: "utilities") pod "918a2349-f139-4017-bfbd-0c1ba69eb930" (UID: "918a2349-f139-4017-bfbd-0c1ba69eb930"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.913735 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.921801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8" (OuterVolumeSpecName: "kube-api-access-tb5p8") pod "918a2349-f139-4017-bfbd-0c1ba69eb930" (UID: "918a2349-f139-4017-bfbd-0c1ba69eb930"). InnerVolumeSpecName "kube-api-access-tb5p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:03:11 crc kubenswrapper[4760]: I0123 19:03:11.957315 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "918a2349-f139-4017-bfbd-0c1ba69eb930" (UID: "918a2349-f139-4017-bfbd-0c1ba69eb930"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.015642 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb5p8\" (UniqueName: \"kubernetes.io/projected/918a2349-f139-4017-bfbd-0c1ba69eb930-kube-api-access-tb5p8\") on node \"crc\" DevicePath \"\"" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.015682 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918a2349-f139-4017-bfbd-0c1ba69eb930-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.350265 4760 generic.go:334] "Generic (PLEG): container finished" podID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerID="e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2" exitCode=0 Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.350316 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerDied","Data":"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2"} Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.350343 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkl8g" event={"ID":"918a2349-f139-4017-bfbd-0c1ba69eb930","Type":"ContainerDied","Data":"e85ec484ef96d046ad75135a74a7d7a0bb22c1ac9488e39f457e4d3cb6e4061e"} Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.350362 4760 scope.go:117] "RemoveContainer" containerID="e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.350367 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkl8g" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.370390 4760 scope.go:117] "RemoveContainer" containerID="b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.405557 4760 scope.go:117] "RemoveContainer" containerID="634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.414465 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.425903 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkl8g"] Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.458659 4760 scope.go:117] "RemoveContainer" containerID="e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2" Jan 23 19:03:12 crc kubenswrapper[4760]: E0123 19:03:12.459173 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2\": container with ID starting with e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2 not found: ID does not exist" containerID="e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.459295 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2"} err="failed to get container status \"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2\": rpc error: code = NotFound desc = could not find container \"e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2\": container with ID starting with e4b3c9f1efffc9ffa9776c9b6cee8757ced2bd5794e15361d356d520580161f2 not 
found: ID does not exist" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.459386 4760 scope.go:117] "RemoveContainer" containerID="b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931" Jan 23 19:03:12 crc kubenswrapper[4760]: E0123 19:03:12.459833 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931\": container with ID starting with b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931 not found: ID does not exist" containerID="b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.459872 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931"} err="failed to get container status \"b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931\": rpc error: code = NotFound desc = could not find container \"b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931\": container with ID starting with b4f31a1a5ddbf7c75887b2b2451f7bab9cf186fe7fe951d645e1f27f981b1931 not found: ID does not exist" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.459899 4760 scope.go:117] "RemoveContainer" containerID="634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945" Jan 23 19:03:12 crc kubenswrapper[4760]: E0123 19:03:12.460185 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945\": container with ID starting with 634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945 not found: ID does not exist" containerID="634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945" Jan 23 19:03:12 crc kubenswrapper[4760]: I0123 19:03:12.460224 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945"} err="failed to get container status \"634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945\": rpc error: code = NotFound desc = could not find container \"634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945\": container with ID starting with 634979fc1c4db10e95a42fadf407b4d5206fcd09e775c5ca578295adce973945 not found: ID does not exist" Jan 23 19:03:13 crc kubenswrapper[4760]: I0123 19:03:13.610229 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" path="/var/lib/kubelet/pods/918a2349-f139-4017-bfbd-0c1ba69eb930/volumes" Jan 23 19:03:14 crc kubenswrapper[4760]: I0123 19:03:14.599560 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:03:14 crc kubenswrapper[4760]: E0123 19:03:14.601373 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:03:25 crc kubenswrapper[4760]: I0123 19:03:25.597245 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:03:25 crc kubenswrapper[4760]: E0123 19:03:25.598591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:03:37 crc kubenswrapper[4760]: I0123 19:03:37.605016 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:03:37 crc kubenswrapper[4760]: E0123 19:03:37.605831 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:03:49 crc kubenswrapper[4760]: I0123 19:03:49.596704 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:03:49 crc kubenswrapper[4760]: E0123 19:03:49.597506 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:04:00 crc kubenswrapper[4760]: I0123 19:04:00.595901 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:04:00 crc kubenswrapper[4760]: E0123 19:04:00.596830 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:04:13 crc kubenswrapper[4760]: I0123 19:04:13.595114 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:04:13 crc kubenswrapper[4760]: E0123 19:04:13.595856 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:04:14 crc kubenswrapper[4760]: I0123 19:04:14.042831 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-thpxd"] Jan 23 19:04:14 crc kubenswrapper[4760]: I0123 19:04:14.054879 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-thpxd"] Jan 23 19:04:15 crc kubenswrapper[4760]: I0123 19:04:15.032789 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-aa79-account-create-update-lsdwl"] Jan 23 19:04:15 crc kubenswrapper[4760]: I0123 19:04:15.042119 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-aa79-account-create-update-lsdwl"] Jan 23 19:04:15 crc kubenswrapper[4760]: I0123 19:04:15.605438 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f83e20-da38-4866-860b-a54d1f424fbd" path="/var/lib/kubelet/pods/54f83e20-da38-4866-860b-a54d1f424fbd/volumes" Jan 23 19:04:15 crc kubenswrapper[4760]: I0123 19:04:15.614663 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9a69b0b5-d356-4cf9-85f3-148ce3f7ee14" path="/var/lib/kubelet/pods/9a69b0b5-d356-4cf9-85f3-148ce3f7ee14/volumes" Jan 23 19:04:25 crc kubenswrapper[4760]: I0123 19:04:25.600095 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:04:25 crc kubenswrapper[4760]: E0123 19:04:25.602208 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:04:37 crc kubenswrapper[4760]: I0123 19:04:37.603399 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:04:37 crc kubenswrapper[4760]: E0123 19:04:37.604615 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:04:50 crc kubenswrapper[4760]: I0123 19:04:50.595937 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:04:50 crc kubenswrapper[4760]: E0123 19:04:50.596800 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:04 crc kubenswrapper[4760]: I0123 19:05:04.595954 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:05:04 crc kubenswrapper[4760]: E0123 19:05:04.596815 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:08 crc kubenswrapper[4760]: I0123 19:05:08.411731 4760 scope.go:117] "RemoveContainer" containerID="29d2cfed54b85d4f8e966c27e5b29e329b9fc2aa69a8b8110c8c87b8e30dd665" Jan 23 19:05:08 crc kubenswrapper[4760]: I0123 19:05:08.438869 4760 scope.go:117] "RemoveContainer" containerID="81cc9abdcb4037186395d162fffe909169ccb3a96d43d46483455e6eed074c76" Jan 23 19:05:10 crc kubenswrapper[4760]: I0123 19:05:10.051491 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-tzrlf"] Jan 23 19:05:10 crc kubenswrapper[4760]: I0123 19:05:10.061336 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-tzrlf"] Jan 23 19:05:11 crc kubenswrapper[4760]: I0123 19:05:11.607053 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09f2af9-285a-461d-b04c-77b23410dc37" path="/var/lib/kubelet/pods/e09f2af9-285a-461d-b04c-77b23410dc37/volumes" Jan 23 19:05:16 crc kubenswrapper[4760]: I0123 19:05:16.594943 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 
19:05:16 crc kubenswrapper[4760]: E0123 19:05:16.595662 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:30 crc kubenswrapper[4760]: I0123 19:05:30.595171 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:05:30 crc kubenswrapper[4760]: E0123 19:05:30.596055 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:42 crc kubenswrapper[4760]: I0123 19:05:42.595269 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:05:42 crc kubenswrapper[4760]: E0123 19:05:42.596114 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:53 crc kubenswrapper[4760]: I0123 19:05:53.595965 4760 scope.go:117] "RemoveContainer" 
containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:05:53 crc kubenswrapper[4760]: E0123 19:05:53.597037 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.929250 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:05:58 crc kubenswrapper[4760]: E0123 19:05:58.930334 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="registry-server" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.930348 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="registry-server" Jan 23 19:05:58 crc kubenswrapper[4760]: E0123 19:05:58.930365 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="extract-utilities" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.930373 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="extract-utilities" Jan 23 19:05:58 crc kubenswrapper[4760]: E0123 19:05:58.930391 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="extract-content" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.930399 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="extract-content" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.930636 
4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="918a2349-f139-4017-bfbd-0c1ba69eb930" containerName="registry-server" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.945329 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:58 crc kubenswrapper[4760]: I0123 19:05:58.945591 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.014338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.014438 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84s6\" (UniqueName: \"kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.014592 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.116429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content\") pod 
\"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.116527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.116590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84s6\" (UniqueName: \"kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.117105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.117158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities\") pod \"redhat-operators-t8587\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.137661 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84s6\" (UniqueName: \"kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6\") pod \"redhat-operators-t8587\" (UID: 
\"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.270026 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:05:59 crc kubenswrapper[4760]: I0123 19:05:59.809916 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:06:00 crc kubenswrapper[4760]: I0123 19:06:00.784284 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerID="7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d" exitCode=0 Jan 23 19:06:00 crc kubenswrapper[4760]: I0123 19:06:00.784356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerDied","Data":"7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d"} Jan 23 19:06:00 crc kubenswrapper[4760]: I0123 19:06:00.784580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerStarted","Data":"2c7343aa637c84d24081223dc73ea294f270339ae37bce3f4c8d7574a6dc13bc"} Jan 23 19:06:00 crc kubenswrapper[4760]: I0123 19:06:00.786290 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:06:01 crc kubenswrapper[4760]: I0123 19:06:01.795517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerStarted","Data":"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2"} Jan 23 19:06:02 crc kubenswrapper[4760]: I0123 19:06:02.811287 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerID="6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2" exitCode=0 Jan 23 19:06:02 crc kubenswrapper[4760]: I0123 19:06:02.811361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerDied","Data":"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2"} Jan 23 19:06:03 crc kubenswrapper[4760]: I0123 19:06:03.822779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerStarted","Data":"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9"} Jan 23 19:06:03 crc kubenswrapper[4760]: I0123 19:06:03.845777 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8587" podStartSLOduration=3.287277093 podStartE2EDuration="5.845760833s" podCreationTimestamp="2026-01-23 19:05:58 +0000 UTC" firstStartedPulling="2026-01-23 19:06:00.786033658 +0000 UTC m=+3903.788491591" lastFinishedPulling="2026-01-23 19:06:03.344517398 +0000 UTC m=+3906.346975331" observedRunningTime="2026-01-23 19:06:03.839862781 +0000 UTC m=+3906.842320704" watchObservedRunningTime="2026-01-23 19:06:03.845760833 +0000 UTC m=+3906.848218766" Jan 23 19:06:06 crc kubenswrapper[4760]: I0123 19:06:06.595900 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:06:06 crc kubenswrapper[4760]: E0123 19:06:06.596783 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:06:08 crc kubenswrapper[4760]: I0123 19:06:08.533714 4760 scope.go:117] "RemoveContainer" containerID="7ce1a19c996f75fc5ef11421c54cc2a46057b1f3f177f8efb7a3f891aecc4ddc" Jan 23 19:06:09 crc kubenswrapper[4760]: I0123 19:06:09.271117 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:09 crc kubenswrapper[4760]: I0123 19:06:09.271230 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:09 crc kubenswrapper[4760]: I0123 19:06:09.352458 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:09 crc kubenswrapper[4760]: I0123 19:06:09.916691 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:09 crc kubenswrapper[4760]: I0123 19:06:09.969609 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:06:11 crc kubenswrapper[4760]: I0123 19:06:11.880991 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8587" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="registry-server" containerID="cri-o://8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9" gracePeriod=2 Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.382125 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.495732 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d84s6\" (UniqueName: \"kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6\") pod \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.496101 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content\") pod \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.496151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities\") pod \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\" (UID: \"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1\") " Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.497002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities" (OuterVolumeSpecName: "utilities") pod "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" (UID: "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.503666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6" (OuterVolumeSpecName: "kube-api-access-d84s6") pod "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" (UID: "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1"). InnerVolumeSpecName "kube-api-access-d84s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.598842 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.598875 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d84s6\" (UniqueName: \"kubernetes.io/projected/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-kube-api-access-d84s6\") on node \"crc\" DevicePath \"\"" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.641888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" (UID: "aa3c5dd3-7ab9-4edc-8e93-609e39e453e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.701247 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.892725 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerID="8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9" exitCode=0 Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.892950 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerDied","Data":"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9"} Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.893031 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8587" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.893041 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8587" event={"ID":"aa3c5dd3-7ab9-4edc-8e93-609e39e453e1","Type":"ContainerDied","Data":"2c7343aa637c84d24081223dc73ea294f270339ae37bce3f4c8d7574a6dc13bc"} Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.893058 4760 scope.go:117] "RemoveContainer" containerID="8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.917010 4760 scope.go:117] "RemoveContainer" containerID="6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2" Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.956871 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.963150 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8587"] Jan 23 19:06:12 crc kubenswrapper[4760]: I0123 19:06:12.964671 4760 scope.go:117] "RemoveContainer" containerID="7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d" Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.008984 4760 scope.go:117] "RemoveContainer" containerID="8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9" Jan 23 19:06:13 crc kubenswrapper[4760]: E0123 19:06:13.020185 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9\": container with ID starting with 8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9 not found: ID does not exist" containerID="8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9" Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.020240 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9"} err="failed to get container status \"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9\": rpc error: code = NotFound desc = could not find container \"8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9\": container with ID starting with 8b3da379c17f65e0d6b0e944e77a45227ee47a4a5c0b6002166d2c4c21671bb9 not found: ID does not exist"
Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.020275 4760 scope.go:117] "RemoveContainer" containerID="6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2"
Jan 23 19:06:13 crc kubenswrapper[4760]: E0123 19:06:13.020943 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2\": container with ID starting with 6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2 not found: ID does not exist" containerID="6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2"
Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.020973 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2"} err="failed to get container status \"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2\": rpc error: code = NotFound desc = could not find container \"6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2\": container with ID starting with 6cb5c2927b4f3090a74d8c661b17f8891c8ddba5a8f8aae5d0a4fa29410d47e2 not found: ID does not exist"
Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.020993 4760 scope.go:117] "RemoveContainer" containerID="7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d"
Jan 23 19:06:13 crc kubenswrapper[4760]: E0123 19:06:13.021206 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d\": container with ID starting with 7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d not found: ID does not exist" containerID="7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d"
Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.021244 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d"} err="failed to get container status \"7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d\": rpc error: code = NotFound desc = could not find container \"7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d\": container with ID starting with 7f6257c46a41930e1f34671d75a7e7bbe700f8d4a217548cf9f75e79fbe5502d not found: ID does not exist"
Jan 23 19:06:13 crc kubenswrapper[4760]: I0123 19:06:13.608314 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" path="/var/lib/kubelet/pods/aa3c5dd3-7ab9-4edc-8e93-609e39e453e1/volumes"
Jan 23 19:06:20 crc kubenswrapper[4760]: I0123 19:06:20.595557 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:06:20 crc kubenswrapper[4760]: E0123 19:06:20.596475 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:06:33 crc kubenswrapper[4760]: I0123 19:06:33.595110 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:06:33 crc kubenswrapper[4760]: E0123 19:06:33.596136 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:06:44 crc kubenswrapper[4760]: I0123 19:06:44.595765 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:06:44 crc kubenswrapper[4760]: E0123 19:06:44.596681 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:06:57 crc kubenswrapper[4760]: I0123 19:06:57.604328 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:06:57 crc kubenswrapper[4760]: E0123 19:06:57.605225 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:07:11 crc kubenswrapper[4760]: I0123 19:07:11.595612 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:07:11 crc kubenswrapper[4760]: E0123 19:07:11.596368 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:07:23 crc kubenswrapper[4760]: I0123 19:07:23.595285 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:07:23 crc kubenswrapper[4760]: E0123 19:07:23.596125 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.815867 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:31 crc kubenswrapper[4760]: E0123 19:07:31.816897 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="registry-server"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.816913 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="registry-server"
Jan 23 19:07:31 crc kubenswrapper[4760]: E0123 19:07:31.816925 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="extract-utilities"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.816934 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="extract-utilities"
Jan 23 19:07:31 crc kubenswrapper[4760]: E0123 19:07:31.816946 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="extract-content"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.816957 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="extract-content"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.817217 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3c5dd3-7ab9-4edc-8e93-609e39e453e1" containerName="registry-server"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.819014 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.826266 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.917448 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.917852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:31 crc kubenswrapper[4760]: I0123 19:07:31.918092 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb57h\" (UniqueName: \"kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.019670 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb57h\" (UniqueName: \"kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.019824 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.019856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.020465 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.021299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.039838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb57h\" (UniqueName: \"kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h\") pod \"redhat-marketplace-srxq4\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") " pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.159448 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:32 crc kubenswrapper[4760]: I0123 19:07:32.675172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:33 crc kubenswrapper[4760]: I0123 19:07:33.583602 4760 generic.go:334] "Generic (PLEG): container finished" podID="a25ce959-1d7e-4925-9329-180472988a02" containerID="46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991" exitCode=0
Jan 23 19:07:33 crc kubenswrapper[4760]: I0123 19:07:33.583678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerDied","Data":"46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991"}
Jan 23 19:07:33 crc kubenswrapper[4760]: I0123 19:07:33.584253 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerStarted","Data":"be2fc5567782915f128b017d933779a08f83acb94cb880460c2b5b07fe2f0f63"}
Jan 23 19:07:35 crc kubenswrapper[4760]: I0123 19:07:35.608603 4760 generic.go:334] "Generic (PLEG): container finished" podID="a25ce959-1d7e-4925-9329-180472988a02" containerID="1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3" exitCode=0
Jan 23 19:07:35 crc kubenswrapper[4760]: I0123 19:07:35.608696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerDied","Data":"1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3"}
Jan 23 19:07:36 crc kubenswrapper[4760]: I0123 19:07:36.619808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerStarted","Data":"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"}
Jan 23 19:07:36 crc kubenswrapper[4760]: I0123 19:07:36.644574 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srxq4" podStartSLOduration=3.094497499 podStartE2EDuration="5.644552638s" podCreationTimestamp="2026-01-23 19:07:31 +0000 UTC" firstStartedPulling="2026-01-23 19:07:33.585670885 +0000 UTC m=+3996.588128818" lastFinishedPulling="2026-01-23 19:07:36.135726034 +0000 UTC m=+3999.138183957" observedRunningTime="2026-01-23 19:07:36.641026791 +0000 UTC m=+3999.643484724" watchObservedRunningTime="2026-01-23 19:07:36.644552638 +0000 UTC m=+3999.647010591"
Jan 23 19:07:37 crc kubenswrapper[4760]: I0123 19:07:37.603971 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:07:37 crc kubenswrapper[4760]: E0123 19:07:37.604295 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"
Jan 23 19:07:42 crc kubenswrapper[4760]: I0123 19:07:42.160359 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:42 crc kubenswrapper[4760]: I0123 19:07:42.160946 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:42 crc kubenswrapper[4760]: I0123 19:07:42.212519 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:42 crc kubenswrapper[4760]: I0123 19:07:42.713467 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:42 crc kubenswrapper[4760]: I0123 19:07:42.764709 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:44 crc kubenswrapper[4760]: I0123 19:07:44.681955 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srxq4" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="registry-server" containerID="cri-o://81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5" gracePeriod=2
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.592226 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.694980 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srxq4"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerDied","Data":"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"}
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695091 4760 scope.go:117] "RemoveContainer" containerID="81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695238 4760 generic.go:334] "Generic (PLEG): container finished" podID="a25ce959-1d7e-4925-9329-180472988a02" containerID="81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5" exitCode=0
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srxq4" event={"ID":"a25ce959-1d7e-4925-9329-180472988a02","Type":"ContainerDied","Data":"be2fc5567782915f128b017d933779a08f83acb94cb880460c2b5b07fe2f0f63"}
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695313 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities\") pod \"a25ce959-1d7e-4925-9329-180472988a02\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") "
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content\") pod \"a25ce959-1d7e-4925-9329-180472988a02\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") "
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.695595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb57h\" (UniqueName: \"kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h\") pod \"a25ce959-1d7e-4925-9329-180472988a02\" (UID: \"a25ce959-1d7e-4925-9329-180472988a02\") "
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.696381 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities" (OuterVolumeSpecName: "utilities") pod "a25ce959-1d7e-4925-9329-180472988a02" (UID: "a25ce959-1d7e-4925-9329-180472988a02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.723199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25ce959-1d7e-4925-9329-180472988a02" (UID: "a25ce959-1d7e-4925-9329-180472988a02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.777339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h" (OuterVolumeSpecName: "kube-api-access-hb57h") pod "a25ce959-1d7e-4925-9329-180472988a02" (UID: "a25ce959-1d7e-4925-9329-180472988a02"). InnerVolumeSpecName "kube-api-access-hb57h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.797969 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.798017 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25ce959-1d7e-4925-9329-180472988a02-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.798033 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb57h\" (UniqueName: \"kubernetes.io/projected/a25ce959-1d7e-4925-9329-180472988a02-kube-api-access-hb57h\") on node \"crc\" DevicePath \"\""
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.836637 4760 scope.go:117] "RemoveContainer" containerID="1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.856891 4760 scope.go:117] "RemoveContainer" containerID="46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.908598 4760 scope.go:117] "RemoveContainer" containerID="81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"
Jan 23 19:07:45 crc kubenswrapper[4760]: E0123 19:07:45.908963 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5\": container with ID starting with 81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5 not found: ID does not exist" containerID="81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.909041 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5"} err="failed to get container status \"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5\": rpc error: code = NotFound desc = could not find container \"81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5\": container with ID starting with 81e254f626af88d65b6e6d9fe036ac77f27e5e03ac1a24a937cd1471ce0ddad5 not found: ID does not exist"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.909062 4760 scope.go:117] "RemoveContainer" containerID="1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3"
Jan 23 19:07:45 crc kubenswrapper[4760]: E0123 19:07:45.909260 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3\": container with ID starting with 1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3 not found: ID does not exist" containerID="1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.909288 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3"} err="failed to get container status \"1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3\": rpc error: code = NotFound desc = could not find container \"1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3\": container with ID starting with 1b50de0f5d7dd33d5786401f4a87611b9c4b6203e3a56fbf4d2a7bdff6bfb8d3 not found: ID does not exist"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.909306 4760 scope.go:117] "RemoveContainer" containerID="46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991"
Jan 23 19:07:45 crc kubenswrapper[4760]: E0123 19:07:45.909559 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991\": container with ID starting with 46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991 not found: ID does not exist" containerID="46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991"
Jan 23 19:07:45 crc kubenswrapper[4760]: I0123 19:07:45.909587 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991"} err="failed to get container status \"46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991\": rpc error: code = NotFound desc = could not find container \"46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991\": container with ID starting with 46968dfe7e12bf3bc2c3c27ee457db642c6fc051854320ec1c377ad48e9ca991 not found: ID does not exist"
Jan 23 19:07:46 crc kubenswrapper[4760]: I0123 19:07:46.029695 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:46 crc kubenswrapper[4760]: I0123 19:07:46.038719 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srxq4"]
Jan 23 19:07:47 crc kubenswrapper[4760]: I0123 19:07:47.607939 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25ce959-1d7e-4925-9329-180472988a02" path="/var/lib/kubelet/pods/a25ce959-1d7e-4925-9329-180472988a02/volumes"
Jan 23 19:07:50 crc kubenswrapper[4760]: I0123 19:07:50.595423 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a"
Jan 23 19:07:51 crc kubenswrapper[4760]: I0123 19:07:51.748019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04"}
Jan 23 19:10:00 crc kubenswrapper[4760]: I0123 19:10:00.800179 4760 generic.go:334] "Generic (PLEG): container finished" podID="6b58d560-8084-471f-a385-c36ce2d28bd8" containerID="71dc049cb52a15c923f9de9e5014f4db0ebec00e41ea073d44b4dca135b3cf55" exitCode=0
Jan 23 19:10:00 crc kubenswrapper[4760]: I0123 19:10:00.800284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b58d560-8084-471f-a385-c36ce2d28bd8","Type":"ContainerDied","Data":"71dc049cb52a15c923f9de9e5014f4db0ebec00e41ea073d44b4dca135b3cf55"}
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.172609 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.329726 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.329794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.329819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.329887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.329981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.330019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmftz\" (UniqueName: \"kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.330044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.330069 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.330163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary\") pod \"6b58d560-8084-471f-a385-c36ce2d28bd8\" (UID: \"6b58d560-8084-471f-a385-c36ce2d28bd8\") "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.331461 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data" (OuterVolumeSpecName: "config-data") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.332255 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.334969 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.339153 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz" (OuterVolumeSpecName: "kube-api-access-gmftz") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "kube-api-access-gmftz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.339217 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.364548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.365550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.367201 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.384664 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6b58d560-8084-471f-a385-c36ce2d28bd8" (UID: "6b58d560-8084-471f-a385-c36ce2d28bd8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.432950 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.432989 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433000 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b58d560-8084-471f-a385-c36ce2d28bd8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433012 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433020 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433030 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433039 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b58d560-8084-471f-a385-c36ce2d28bd8-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433069 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.433077 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmftz\" (UniqueName: \"kubernetes.io/projected/6b58d560-8084-471f-a385-c36ce2d28bd8-kube-api-access-gmftz\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.452176 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.535457 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.818090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b58d560-8084-471f-a385-c36ce2d28bd8","Type":"ContainerDied","Data":"e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446"}
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.818133 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d6fc62d8decc361548fef21c0bab807b6ebe45f5316bbda8a741f66a155446"
Jan 23 19:10:02 crc kubenswrapper[4760]: I0123 19:10:02.818385 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.278244 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 23 19:10:11 crc kubenswrapper[4760]: E0123 19:10:11.279501 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b58d560-8084-471f-a385-c36ce2d28bd8" containerName="tempest-tests-tempest-tests-runner"
Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.279523 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b58d560-8084-471f-a385-c36ce2d28bd8" containerName="tempest-tests-tempest-tests-runner"
Jan 23 19:10:11 crc kubenswrapper[4760]: E0123 19:10:11.279549 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="registry-server"
Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.279562 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="registry-server"
Jan 23 19:10:11 crc kubenswrapper[4760]: E0123 19:10:11.279620 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="extract-utilities"
Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.279638 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="extract-utilities"
Jan 23 19:10:11 crc kubenswrapper[4760]: E0123 19:10:11.279710 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="extract-content"
Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.279727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="extract-content"
Jan 23 19:10:11 crc 
kubenswrapper[4760]: I0123 19:10:11.280172 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25ce959-1d7e-4925-9329-180472988a02" containerName="registry-server" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.280213 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b58d560-8084-471f-a385-c36ce2d28bd8" containerName="tempest-tests-tempest-tests-runner" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.281812 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.284129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xkghz" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.303985 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.416502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.416609 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2j7\" (UniqueName: \"kubernetes.io/projected/731cb86c-5b1c-4f47-843a-bd70bc4656d3-kube-api-access-6d2j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.518613 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.518676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d2j7\" (UniqueName: \"kubernetes.io/projected/731cb86c-5b1c-4f47-843a-bd70bc4656d3-kube-api-access-6d2j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.519385 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.775938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2j7\" (UniqueName: \"kubernetes.io/projected/731cb86c-5b1c-4f47-843a-bd70bc4656d3-kube-api-access-6d2j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.800907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"731cb86c-5b1c-4f47-843a-bd70bc4656d3\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:11 crc kubenswrapper[4760]: I0123 19:10:11.920892 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:10:12 crc kubenswrapper[4760]: I0123 19:10:12.401913 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 19:10:12 crc kubenswrapper[4760]: I0123 19:10:12.924938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"731cb86c-5b1c-4f47-843a-bd70bc4656d3","Type":"ContainerStarted","Data":"1f6e1ced17c81b21b92b9dfb9ed7e5bdbe7c57ed9abcdc3a318ff7aaeb828bbe"} Jan 23 19:10:14 crc kubenswrapper[4760]: I0123 19:10:14.942109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"731cb86c-5b1c-4f47-843a-bd70bc4656d3","Type":"ContainerStarted","Data":"466550ec78f3aa99e209c4f542f74d2b05539c02b01af2078026bb9f5a763167"} Jan 23 19:10:14 crc kubenswrapper[4760]: I0123 19:10:14.978913 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.939286538 podStartE2EDuration="3.978886765s" podCreationTimestamp="2026-01-23 19:10:11 +0000 UTC" firstStartedPulling="2026-01-23 19:10:12.41474092 +0000 UTC m=+4155.417198883" lastFinishedPulling="2026-01-23 19:10:14.454341177 +0000 UTC m=+4157.456799110" observedRunningTime="2026-01-23 19:10:14.970194286 +0000 UTC m=+4157.972652219" watchObservedRunningTime="2026-01-23 19:10:14.978886765 +0000 UTC m=+4157.981344698" Jan 23 19:10:16 crc kubenswrapper[4760]: I0123 19:10:16.075840 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:10:16 crc kubenswrapper[4760]: I0123 19:10:16.076239 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.541125 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hq4g9/must-gather-f7spt"] Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.543168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.544899 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hq4g9"/"openshift-service-ca.crt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.545118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hq4g9"/"kube-root-ca.crt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.545257 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hq4g9"/"default-dockercfg-sn8pf" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.554865 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hq4g9/must-gather-f7spt"] Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.694498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output\") pod \"must-gather-f7spt\" (UID: 
\"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.694565 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vww\" (UniqueName: \"kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww\") pod \"must-gather-f7spt\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.798558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output\") pod \"must-gather-f7spt\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.798643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vww\" (UniqueName: \"kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww\") pod \"must-gather-f7spt\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.799181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output\") pod \"must-gather-f7spt\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.830146 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vww\" (UniqueName: \"kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww\") pod \"must-gather-f7spt\" (UID: 
\"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:39 crc kubenswrapper[4760]: I0123 19:10:39.862781 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:10:40 crc kubenswrapper[4760]: I0123 19:10:40.292578 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hq4g9/must-gather-f7spt"] Jan 23 19:10:41 crc kubenswrapper[4760]: I0123 19:10:41.214008 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/must-gather-f7spt" event={"ID":"72aaafbc-91a2-433a-b56c-15c2e96731ee","Type":"ContainerStarted","Data":"29dc00003e1aeaa81369519e5862e88840c97efa4cfb717552b7bf2ae14ee927"} Jan 23 19:10:46 crc kubenswrapper[4760]: I0123 19:10:46.075664 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:10:46 crc kubenswrapper[4760]: I0123 19:10:46.076245 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:10:47 crc kubenswrapper[4760]: I0123 19:10:47.272147 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/must-gather-f7spt" event={"ID":"72aaafbc-91a2-433a-b56c-15c2e96731ee","Type":"ContainerStarted","Data":"3060f6af6ef1ba16199a60ce6c86536b8f63532d9a7e1f0f13405c1536828b54"} Jan 23 19:10:47 crc kubenswrapper[4760]: I0123 19:10:47.272517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-hq4g9/must-gather-f7spt" event={"ID":"72aaafbc-91a2-433a-b56c-15c2e96731ee","Type":"ContainerStarted","Data":"bbf4ae6ed6af773b60503ca375674178ee2553473f08f3295a1f12a9d09362c3"} Jan 23 19:10:47 crc kubenswrapper[4760]: I0123 19:10:47.297359 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hq4g9/must-gather-f7spt" podStartSLOduration=1.842566827 podStartE2EDuration="8.297339748s" podCreationTimestamp="2026-01-23 19:10:39 +0000 UTC" firstStartedPulling="2026-01-23 19:10:40.31081173 +0000 UTC m=+4183.313269663" lastFinishedPulling="2026-01-23 19:10:46.765584651 +0000 UTC m=+4189.768042584" observedRunningTime="2026-01-23 19:10:47.288157245 +0000 UTC m=+4190.290615188" watchObservedRunningTime="2026-01-23 19:10:47.297339748 +0000 UTC m=+4190.299797681" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.418773 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-wflrj"] Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.420607 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.506210 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzc89\" (UniqueName: \"kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.506341 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.608975 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzc89\" (UniqueName: \"kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.609334 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc kubenswrapper[4760]: I0123 19:10:51.609490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:51 crc 
kubenswrapper[4760]: I0123 19:10:51.976828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzc89\" (UniqueName: \"kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89\") pod \"crc-debug-wflrj\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:52 crc kubenswrapper[4760]: I0123 19:10:52.059171 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:10:52 crc kubenswrapper[4760]: W0123 19:10:52.111469 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda56496c1_073f_477b_9098_8db5fdb64c19.slice/crio-76d8c8fb17ed3d6159632fa4d851c0b208cf476350cfe748d77a978c260ee502 WatchSource:0}: Error finding container 76d8c8fb17ed3d6159632fa4d851c0b208cf476350cfe748d77a978c260ee502: Status 404 returned error can't find the container with id 76d8c8fb17ed3d6159632fa4d851c0b208cf476350cfe748d77a978c260ee502 Jan 23 19:10:52 crc kubenswrapper[4760]: I0123 19:10:52.322473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" event={"ID":"a56496c1-073f-477b-9098-8db5fdb64c19","Type":"ContainerStarted","Data":"76d8c8fb17ed3d6159632fa4d851c0b208cf476350cfe748d77a978c260ee502"} Jan 23 19:11:03 crc kubenswrapper[4760]: I0123 19:11:03.434014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" event={"ID":"a56496c1-073f-477b-9098-8db5fdb64c19","Type":"ContainerStarted","Data":"f5094091fbf40aa0d6ca656324cc4a41262bbed6da90859d8b34d38824446ef0"} Jan 23 19:11:03 crc kubenswrapper[4760]: I0123 19:11:03.452331 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" podStartSLOduration=1.925589891 podStartE2EDuration="12.452309738s" 
podCreationTimestamp="2026-01-23 19:10:51 +0000 UTC" firstStartedPulling="2026-01-23 19:10:52.115054834 +0000 UTC m=+4195.117512767" lastFinishedPulling="2026-01-23 19:11:02.641774671 +0000 UTC m=+4205.644232614" observedRunningTime="2026-01-23 19:11:03.450531079 +0000 UTC m=+4206.452989032" watchObservedRunningTime="2026-01-23 19:11:03.452309738 +0000 UTC m=+4206.454767671" Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.075591 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.076105 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.076162 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.076938 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.076987 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04" gracePeriod=600 Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.562698 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04" exitCode=0 Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.562802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04"} Jan 23 19:11:16 crc kubenswrapper[4760]: I0123 19:11:16.563310 4760 scope.go:117] "RemoveContainer" containerID="0c16ccffb31a5fa2e091e857bed9f13c5361dd0ccb7fe55a752d4c40ca5a098a" Jan 23 19:11:17 crc kubenswrapper[4760]: I0123 19:11:17.581074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d"} Jan 23 19:11:51 crc kubenswrapper[4760]: I0123 19:11:51.023787 4760 generic.go:334] "Generic (PLEG): container finished" podID="a56496c1-073f-477b-9098-8db5fdb64c19" containerID="f5094091fbf40aa0d6ca656324cc4a41262bbed6da90859d8b34d38824446ef0" exitCode=0 Jan 23 19:11:51 crc kubenswrapper[4760]: I0123 19:11:51.023863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" event={"ID":"a56496c1-073f-477b-9098-8db5fdb64c19","Type":"ContainerDied","Data":"f5094091fbf40aa0d6ca656324cc4a41262bbed6da90859d8b34d38824446ef0"} Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.131305 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.161729 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-wflrj"] Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.169188 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-wflrj"] Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.323599 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host\") pod \"a56496c1-073f-477b-9098-8db5fdb64c19\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.323801 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzc89\" (UniqueName: \"kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89\") pod \"a56496c1-073f-477b-9098-8db5fdb64c19\" (UID: \"a56496c1-073f-477b-9098-8db5fdb64c19\") " Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.323798 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host" (OuterVolumeSpecName: "host") pod "a56496c1-073f-477b-9098-8db5fdb64c19" (UID: "a56496c1-073f-477b-9098-8db5fdb64c19"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.324475 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a56496c1-073f-477b-9098-8db5fdb64c19-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.329480 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89" (OuterVolumeSpecName: "kube-api-access-gzc89") pod "a56496c1-073f-477b-9098-8db5fdb64c19" (UID: "a56496c1-073f-477b-9098-8db5fdb64c19"). InnerVolumeSpecName "kube-api-access-gzc89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:11:52 crc kubenswrapper[4760]: I0123 19:11:52.426988 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzc89\" (UniqueName: \"kubernetes.io/projected/a56496c1-073f-477b-9098-8db5fdb64c19-kube-api-access-gzc89\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.042099 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d8c8fb17ed3d6159632fa4d851c0b208cf476350cfe748d77a978c260ee502" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.042398 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-wflrj" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.338682 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-rs22r"] Jan 23 19:11:53 crc kubenswrapper[4760]: E0123 19:11:53.339160 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56496c1-073f-477b-9098-8db5fdb64c19" containerName="container-00" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.339174 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56496c1-073f-477b-9098-8db5fdb64c19" containerName="container-00" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.339427 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56496c1-073f-477b-9098-8db5fdb64c19" containerName="container-00" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.340119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.349046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.349134 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdth\" (UniqueName: \"kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.452153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.452251 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdth\" (UniqueName: \"kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.452623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.580270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdth\" (UniqueName: \"kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth\") pod \"crc-debug-rs22r\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.605944 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56496c1-073f-477b-9098-8db5fdb64c19" path="/var/lib/kubelet/pods/a56496c1-073f-477b-9098-8db5fdb64c19/volumes" Jan 23 19:11:53 crc kubenswrapper[4760]: I0123 19:11:53.678040 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:54 crc kubenswrapper[4760]: I0123 19:11:54.052334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" event={"ID":"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd","Type":"ContainerStarted","Data":"19f20280b04c1431de8e943ad7868f1b09da22738fcbc7c1cd13b8f9d11e1449"} Jan 23 19:11:54 crc kubenswrapper[4760]: I0123 19:11:54.052883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" event={"ID":"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd","Type":"ContainerStarted","Data":"7df80931257dff282a20080d27189dd5c8d1ab0f040246d68597d29b936694f9"} Jan 23 19:11:54 crc kubenswrapper[4760]: I0123 19:11:54.075348 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" podStartSLOduration=1.075327669 podStartE2EDuration="1.075327669s" podCreationTimestamp="2026-01-23 19:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:11:54.064719037 +0000 UTC m=+4257.067176970" watchObservedRunningTime="2026-01-23 19:11:54.075327669 +0000 UTC m=+4257.077785602" Jan 23 19:11:55 crc kubenswrapper[4760]: I0123 19:11:55.071470 4760 generic.go:334] "Generic (PLEG): container finished" podID="69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" containerID="19f20280b04c1431de8e943ad7868f1b09da22738fcbc7c1cd13b8f9d11e1449" exitCode=0 Jan 23 19:11:55 crc kubenswrapper[4760]: I0123 19:11:55.071540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" event={"ID":"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd","Type":"ContainerDied","Data":"19f20280b04c1431de8e943ad7868f1b09da22738fcbc7c1cd13b8f9d11e1449"} Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.181006 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.219528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfdth\" (UniqueName: \"kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth\") pod \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.219831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host\") pod \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\" (UID: \"69b3204d-2a1b-4ec7-b05d-5b24344a0dbd\") " Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.219927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host" (OuterVolumeSpecName: "host") pod "69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" (UID: "69b3204d-2a1b-4ec7-b05d-5b24344a0dbd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.220255 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.225280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth" (OuterVolumeSpecName: "kube-api-access-hfdth") pod "69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" (UID: "69b3204d-2a1b-4ec7-b05d-5b24344a0dbd"). InnerVolumeSpecName "kube-api-access-hfdth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.315128 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-rs22r"] Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.322224 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfdth\" (UniqueName: \"kubernetes.io/projected/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd-kube-api-access-hfdth\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:56 crc kubenswrapper[4760]: I0123 19:11:56.327903 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-rs22r"] Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.090267 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df80931257dff282a20080d27189dd5c8d1ab0f040246d68597d29b936694f9" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.090300 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-rs22r" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.485391 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-zbh7z"] Jan 23 19:11:57 crc kubenswrapper[4760]: E0123 19:11:57.485882 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" containerName="container-00" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.485900 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" containerName="container-00" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.486121 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" containerName="container-00" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.487084 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.544521 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkp7m\" (UniqueName: \"kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.544718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.608752 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b3204d-2a1b-4ec7-b05d-5b24344a0dbd" path="/var/lib/kubelet/pods/69b3204d-2a1b-4ec7-b05d-5b24344a0dbd/volumes" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.647689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkp7m\" (UniqueName: \"kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.647893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.648086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.669596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkp7m\" (UniqueName: \"kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m\") pod \"crc-debug-zbh7z\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: I0123 19:11:57.803847 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:57 crc kubenswrapper[4760]: W0123 19:11:57.837295 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe2b4140_056b_4da0_89db_9dbd386ad722.slice/crio-47e681086b95d618cb83829f3a36c476c2cf4642ce038bc3285a2350d20d37c8 WatchSource:0}: Error finding container 47e681086b95d618cb83829f3a36c476c2cf4642ce038bc3285a2350d20d37c8: Status 404 returned error can't find the container with id 47e681086b95d618cb83829f3a36c476c2cf4642ce038bc3285a2350d20d37c8 Jan 23 19:11:58 crc kubenswrapper[4760]: I0123 19:11:58.098294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" event={"ID":"be2b4140-056b-4da0-89db-9dbd386ad722","Type":"ContainerStarted","Data":"d445425a4cbc326b16385f084f88881aec101f6617998c5622f4299943ce1fde"} Jan 23 19:11:58 crc kubenswrapper[4760]: I0123 19:11:58.098614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" event={"ID":"be2b4140-056b-4da0-89db-9dbd386ad722","Type":"ContainerStarted","Data":"47e681086b95d618cb83829f3a36c476c2cf4642ce038bc3285a2350d20d37c8"} Jan 23 
19:11:58 crc kubenswrapper[4760]: I0123 19:11:58.133981 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-zbh7z"] Jan 23 19:11:58 crc kubenswrapper[4760]: I0123 19:11:58.141363 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hq4g9/crc-debug-zbh7z"] Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.108211 4760 generic.go:334] "Generic (PLEG): container finished" podID="be2b4140-056b-4da0-89db-9dbd386ad722" containerID="d445425a4cbc326b16385f084f88881aec101f6617998c5622f4299943ce1fde" exitCode=0 Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.214648 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.384402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host\") pod \"be2b4140-056b-4da0-89db-9dbd386ad722\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.384818 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkp7m\" (UniqueName: \"kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m\") pod \"be2b4140-056b-4da0-89db-9dbd386ad722\" (UID: \"be2b4140-056b-4da0-89db-9dbd386ad722\") " Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.385082 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host" (OuterVolumeSpecName: "host") pod "be2b4140-056b-4da0-89db-9dbd386ad722" (UID: "be2b4140-056b-4da0-89db-9dbd386ad722"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.385350 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be2b4140-056b-4da0-89db-9dbd386ad722-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.391034 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m" (OuterVolumeSpecName: "kube-api-access-bkp7m") pod "be2b4140-056b-4da0-89db-9dbd386ad722" (UID: "be2b4140-056b-4da0-89db-9dbd386ad722"). InnerVolumeSpecName "kube-api-access-bkp7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.487978 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkp7m\" (UniqueName: \"kubernetes.io/projected/be2b4140-056b-4da0-89db-9dbd386ad722-kube-api-access-bkp7m\") on node \"crc\" DevicePath \"\"" Jan 23 19:11:59 crc kubenswrapper[4760]: I0123 19:11:59.609244 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2b4140-056b-4da0-89db-9dbd386ad722" path="/var/lib/kubelet/pods/be2b4140-056b-4da0-89db-9dbd386ad722/volumes" Jan 23 19:12:00 crc kubenswrapper[4760]: I0123 19:12:00.121148 4760 scope.go:117] "RemoveContainer" containerID="d445425a4cbc326b16385f084f88881aec101f6617998c5622f4299943ce1fde" Jan 23 19:12:00 crc kubenswrapper[4760]: I0123 19:12:00.121182 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/crc-debug-zbh7z" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.755864 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:17 crc kubenswrapper[4760]: E0123 19:12:17.760830 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2b4140-056b-4da0-89db-9dbd386ad722" containerName="container-00" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.760870 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2b4140-056b-4da0-89db-9dbd386ad722" containerName="container-00" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.761064 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2b4140-056b-4da0-89db-9dbd386ad722" containerName="container-00" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.762580 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.778197 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.935300 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.935380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " 
pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:17 crc kubenswrapper[4760]: I0123 19:12:17.936027 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wt4\" (UniqueName: \"kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.037952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wt4\" (UniqueName: \"kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.038102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.038147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.038615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content\") pod \"certified-operators-7566k\" (UID: 
\"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.038744 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.061141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wt4\" (UniqueName: \"kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4\") pod \"certified-operators-7566k\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.083984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:18 crc kubenswrapper[4760]: I0123 19:12:18.687542 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:19 crc kubenswrapper[4760]: I0123 19:12:19.292750 4760 generic.go:334] "Generic (PLEG): container finished" podID="40cb487e-b946-4609-b574-34be41a1f5e9" containerID="2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada" exitCode=0 Jan 23 19:12:19 crc kubenswrapper[4760]: I0123 19:12:19.292828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerDied","Data":"2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada"} Jan 23 19:12:19 crc kubenswrapper[4760]: I0123 19:12:19.293081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" 
event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerStarted","Data":"8f0f696f073da0af5fd7d040bff252a118a037e33027260b271683176f76f183"} Jan 23 19:12:19 crc kubenswrapper[4760]: I0123 19:12:19.295335 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:12:21 crc kubenswrapper[4760]: I0123 19:12:21.309851 4760 generic.go:334] "Generic (PLEG): container finished" podID="40cb487e-b946-4609-b574-34be41a1f5e9" containerID="ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741" exitCode=0 Jan 23 19:12:21 crc kubenswrapper[4760]: I0123 19:12:21.309947 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerDied","Data":"ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741"} Jan 23 19:12:23 crc kubenswrapper[4760]: I0123 19:12:23.333526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerStarted","Data":"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3"} Jan 23 19:12:23 crc kubenswrapper[4760]: I0123 19:12:23.360792 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7566k" podStartSLOduration=3.824672122 podStartE2EDuration="6.360767124s" podCreationTimestamp="2026-01-23 19:12:17 +0000 UTC" firstStartedPulling="2026-01-23 19:12:19.295055131 +0000 UTC m=+4282.297513074" lastFinishedPulling="2026-01-23 19:12:21.831150143 +0000 UTC m=+4284.833608076" observedRunningTime="2026-01-23 19:12:23.350028518 +0000 UTC m=+4286.352486451" watchObservedRunningTime="2026-01-23 19:12:23.360767124 +0000 UTC m=+4286.363225057" Jan 23 19:12:28 crc kubenswrapper[4760]: I0123 19:12:28.084393 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:28 crc kubenswrapper[4760]: I0123 19:12:28.085032 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:28 crc kubenswrapper[4760]: I0123 19:12:28.133334 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:28 crc kubenswrapper[4760]: I0123 19:12:28.429812 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:29 crc kubenswrapper[4760]: I0123 19:12:29.321710 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.396695 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7566k" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="registry-server" containerID="cri-o://88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3" gracePeriod=2 Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.921634 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.996337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities\") pod \"40cb487e-b946-4609-b574-34be41a1f5e9\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.996604 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content\") pod \"40cb487e-b946-4609-b574-34be41a1f5e9\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.996660 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wt4\" (UniqueName: \"kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4\") pod \"40cb487e-b946-4609-b574-34be41a1f5e9\" (UID: \"40cb487e-b946-4609-b574-34be41a1f5e9\") " Jan 23 19:12:30 crc kubenswrapper[4760]: I0123 19:12:30.998366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities" (OuterVolumeSpecName: "utilities") pod "40cb487e-b946-4609-b574-34be41a1f5e9" (UID: "40cb487e-b946-4609-b574-34be41a1f5e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.004615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4" (OuterVolumeSpecName: "kube-api-access-g5wt4") pod "40cb487e-b946-4609-b574-34be41a1f5e9" (UID: "40cb487e-b946-4609-b574-34be41a1f5e9"). InnerVolumeSpecName "kube-api-access-g5wt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.099235 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wt4\" (UniqueName: \"kubernetes.io/projected/40cb487e-b946-4609-b574-34be41a1f5e9-kube-api-access-g5wt4\") on node \"crc\" DevicePath \"\"" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.099280 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.408920 4760 generic.go:334] "Generic (PLEG): container finished" podID="40cb487e-b946-4609-b574-34be41a1f5e9" containerID="88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3" exitCode=0 Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.409318 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7566k" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.409352 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerDied","Data":"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3"} Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.410808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7566k" event={"ID":"40cb487e-b946-4609-b574-34be41a1f5e9","Type":"ContainerDied","Data":"8f0f696f073da0af5fd7d040bff252a118a037e33027260b271683176f76f183"} Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.410897 4760 scope.go:117] "RemoveContainer" containerID="88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.430130 4760 scope.go:117] "RemoveContainer" 
containerID="ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.449929 4760 scope.go:117] "RemoveContainer" containerID="2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.496444 4760 scope.go:117] "RemoveContainer" containerID="88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3" Jan 23 19:12:31 crc kubenswrapper[4760]: E0123 19:12:31.497329 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3\": container with ID starting with 88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3 not found: ID does not exist" containerID="88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.497389 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3"} err="failed to get container status \"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3\": rpc error: code = NotFound desc = could not find container \"88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3\": container with ID starting with 88adee04985e5ffece998173390f4143e96ab3a0e18284adaa2cfa76484173a3 not found: ID does not exist" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.497436 4760 scope.go:117] "RemoveContainer" containerID="ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741" Jan 23 19:12:31 crc kubenswrapper[4760]: E0123 19:12:31.498124 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741\": container with ID starting with 
ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741 not found: ID does not exist" containerID="ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.498151 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741"} err="failed to get container status \"ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741\": rpc error: code = NotFound desc = could not find container \"ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741\": container with ID starting with ed5bbcde3c40ce6a828c3ed22dc06b67fbaa305f7b68dd1d7ea4d186a73b0741 not found: ID does not exist" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.498173 4760 scope.go:117] "RemoveContainer" containerID="2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada" Jan 23 19:12:31 crc kubenswrapper[4760]: E0123 19:12:31.501506 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada\": container with ID starting with 2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada not found: ID does not exist" containerID="2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.501564 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada"} err="failed to get container status \"2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada\": rpc error: code = NotFound desc = could not find container \"2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada\": container with ID starting with 2e89c48c21584839510fccc00d689b7c45afa8699841d9457a52c6310aa71ada not found: ID does not 
exist" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.529057 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-664c7d54bb-bxtwt_78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986/barbican-api/0.log" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.534953 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40cb487e-b946-4609-b574-34be41a1f5e9" (UID: "40cb487e-b946-4609-b574-34be41a1f5e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.612882 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cb487e-b946-4609-b574-34be41a1f5e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.735527 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.745588 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7566k"] Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.790367 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-664c7d54bb-bxtwt_78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986/barbican-api-log/0.log" Jan 23 19:12:31 crc kubenswrapper[4760]: I0123 19:12:31.810725 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86b58c4bfd-xlh2d_f5e3da3f-c7fe-4735-a284-f35a50c46d2b/barbican-keystone-listener/0.log" Jan 23 19:12:32 crc kubenswrapper[4760]: I0123 19:12:32.074466 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-696f8c69dc-tcwp6_e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5/barbican-worker/0.log" Jan 23 19:12:32 crc kubenswrapper[4760]: I0123 19:12:32.103274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-696f8c69dc-tcwp6_e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5/barbican-worker-log/0.log" Jan 23 19:12:32 crc kubenswrapper[4760]: I0123 19:12:32.321175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86b58c4bfd-xlh2d_f5e3da3f-c7fe-4735-a284-f35a50c46d2b/barbican-keystone-listener-log/0.log" Jan 23 19:12:32 crc kubenswrapper[4760]: I0123 19:12:32.361624 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2_ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:32 crc kubenswrapper[4760]: I0123 19:12:32.978984 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/ceilometer-central-agent/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.060377 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/proxy-httpd/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.073801 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/ceilometer-notification-agent/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.096801 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/sg-core/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.213520 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72_9d2d6784-f9bf-48c1-95a9-0ba4167059a6/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.281591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg_6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.448876 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f611f27e-a46b-40f8-ad28-a32d1dfa1149/cinder-api/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.541060 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f611f27e-a46b-40f8-ad28-a32d1dfa1149/cinder-api-log/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.611764 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" path="/var/lib/kubelet/pods/40cb487e-b946-4609-b574-34be41a1f5e9/volumes" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.683891 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_34fe3286-04be-40d9-a398-86c54b9025f1/probe/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.753354 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_34fe3286-04be-40d9-a398-86c54b9025f1/cinder-backup/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.874838 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3b1a53e-ed5e-43c1-aa57-e0e829359103/cinder-scheduler/0.log" Jan 23 19:12:33 crc kubenswrapper[4760]: I0123 19:12:33.922256 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3b1a53e-ed5e-43c1-aa57-e0e829359103/probe/0.log" Jan 23 19:12:34 crc 
kubenswrapper[4760]: I0123 19:12:34.079079 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_dd897463-2b70-4dcd-9a51-442771b77ff4/cinder-volume/0.log" Jan 23 19:12:34 crc kubenswrapper[4760]: I0123 19:12:34.086360 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_dd897463-2b70-4dcd-9a51-442771b77ff4/probe/0.log" Jan 23 19:12:34 crc kubenswrapper[4760]: I0123 19:12:34.811247 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds_59cea3b6-c76e-4cca-9e9f-15bdeab71c63/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:34 crc kubenswrapper[4760]: I0123 19:12:34.849075 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5_74d60425-0689-4af1-b745-22453031dcfe/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:34 crc kubenswrapper[4760]: I0123 19:12:34.983926 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/init/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.198987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/init/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.230798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/dnsmasq-dns/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.256143 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_058e59c7-9277-4925-810f-105817254775/glance-httpd/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.440372 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_058e59c7-9277-4925-810f-105817254775/glance-log/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.490805 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9/glance-httpd/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.533957 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9/glance-log/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.739286 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-559467fcc6-pxz2z_fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6/horizon/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.793009 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk_826eb339-b444-455e-b66e-f0e3fa00753d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:35 crc kubenswrapper[4760]: I0123 19:12:35.929466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-559467fcc6-pxz2z_fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6/horizon-log/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.125894 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-m9jvz_268fb02f-f216-4953-9868-e7b1d27448f2/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.273270 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486581-qdnbh_f163dd36-7bd7-4821-9874-8eb534d2c03d/keystone-cron/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.346575 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_2ae37337-d347-4c76-ab83-43463ab30c29/kube-state-metrics/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.622394 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-md5ql_538bf016-5ed3-44cd-bcf0-f59c56e01048/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.920181 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74944c68c4-mnfbr_5201f9c2-1e25-4192-8bd6-2e0fb4a5b902/keystone-api/0.log" Jan 23 19:12:36 crc kubenswrapper[4760]: I0123 19:12:36.976423 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0c64c390-9956-4595-b1b9-9bf78be32e68/manila-api/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 19:12:37.048392 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d8d48def-f1d3-47de-9724-65f5d7f0d47a/probe/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 19:12:37.095170 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d8d48def-f1d3-47de-9724-65f5d7f0d47a/manila-scheduler/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 19:12:37.258651 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_412f9ad2-6b73-4af0-bd6e-66a697eb20ba/probe/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 19:12:37.533202 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0c64c390-9956-4595-b1b9-9bf78be32e68/manila-api-log/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 19:12:37.915580 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx_836b1ef8-b075-4321-9f13-18120bc8d010/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:37 crc kubenswrapper[4760]: I0123 
19:12:37.933713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-585856f577-q8bpp_789429a2-8a44-4914-b54c-65e7ccaa180c/neutron-httpd/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.024122 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-585856f577-q8bpp_789429a2-8a44-4914-b54c-65e7ccaa180c/neutron-api/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.268368 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_412f9ad2-6b73-4af0-bd6e-66a697eb20ba/manila-share/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.380399 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ddef27ef-a5e0-4045-9024-6710d89f194a/nova-api-log/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.580759 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ddef27ef-a5e0-4045-9024-6710d89f194a/nova-api-api/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.628336 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_12e7c803-ac52-4cb2-b29e-14973d73c522/nova-cell0-conductor-conductor/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.725955 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_678d1978-a829-44dd-9030-3026a9f170b0/nova-cell1-conductor-conductor/0.log" Jan 23 19:12:38 crc kubenswrapper[4760]: I0123 19:12:38.895537 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6b36e62e-e56c-4619-a885-dc26d824c2ed/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.022078 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44_a7f5467d-783f-4be7-a149-ea8b97bcf468/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.379689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e938088-0a8b-43ef-8e83-e752649de48d/nova-metadata-log/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.494753 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ed55273b-51cf-490f-80ec-003cd28fa749/nova-scheduler-scheduler/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.610498 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/mysql-bootstrap/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.837023 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/galera/0.log" Jan 23 19:12:39 crc kubenswrapper[4760]: I0123 19:12:39.838696 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/mysql-bootstrap/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.074522 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/mysql-bootstrap/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.238941 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/mysql-bootstrap/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.257200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/galera/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.425313 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b41e1f55-3448-4112-8aca-c5c2d6018310/openstackclient/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.528903 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2wpph_ea0533e4-88c1-4a03-93a9-f0680acaafc5/ovn-controller/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.729585 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e938088-0a8b-43ef-8e83-e752649de48d/nova-metadata-metadata/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.739664 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dc25n_b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212/openstack-network-exporter/0.log" Jan 23 19:12:40 crc kubenswrapper[4760]: I0123 19:12:40.916274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server-init/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.144900 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.177889 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovs-vswitchd/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.250967 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server-init/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.466462 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xpd9x_12785b41-cc5b-4404-ac5d-42b24f3046b4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:41 crc 
kubenswrapper[4760]: I0123 19:12:41.492430 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_519052b1-de37-42f3-8811-9252e225ad9b/openstack-network-exporter/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.535442 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_519052b1-de37-42f3-8811-9252e225ad9b/ovn-northd/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.669042 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c82ab7b9-010c-49aa-b6cc-a654dad56b87/openstack-network-exporter/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.726014 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c82ab7b9-010c-49aa-b6cc-a654dad56b87/ovsdbserver-nb/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.880104 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7350c936-08ea-4b64-ae16-a0a7c3241c52/openstack-network-exporter/0.log" Jan 23 19:12:41 crc kubenswrapper[4760]: I0123 19:12:41.952127 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7350c936-08ea-4b64-ae16-a0a7c3241c52/ovsdbserver-sb/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 19:12:42.054445 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66645c546d-bcr2r_f40c8acd-aef7-4575-bf9b-18a4e220b34b/placement-api/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 19:12:42.126350 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66645c546d-bcr2r_f40c8acd-aef7-4575-bf9b-18a4e220b34b/placement-log/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 19:12:42.598440 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/setup-container/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 
19:12:42.824305 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/rabbitmq/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 19:12:42.831588 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/setup-container/0.log" Jan 23 19:12:42 crc kubenswrapper[4760]: I0123 19:12:42.855395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/setup-container/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.140814 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/setup-container/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.190152 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw_85969cec-7a43-4aed-9ec1-522308d222a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.215447 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/rabbitmq/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.395059 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6_c6af47ac-285d-4ee9-8ab6-1aa3d98d3927/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.699497 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-wgxkm_b5489ba9-2339-49ff-b4b1-5ac088f89e85/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:12:43 crc kubenswrapper[4760]: I0123 19:12:43.785269 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pr2nd_05548e64-64a8-42d3-8611-6b10492801d6/ssh-known-hosts-edpm-deployment/0.log" Jan 23 19:12:44 crc kubenswrapper[4760]: I0123 19:12:44.432030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_731cb86c-5b1c-4f47-843a-bd70bc4656d3/test-operator-logs-container/0.log" Jan 23 19:12:44 crc kubenswrapper[4760]: I0123 19:12:44.434361 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6b58d560-8084-471f-a385-c36ce2d28bd8/tempest-tests-tempest-tests-runner/0.log" Jan 23 19:12:44 crc kubenswrapper[4760]: I0123 19:12:44.609938 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6_4824cd7d-8d66-48ac-bf98-f7f4ee516458/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:13:00 crc kubenswrapper[4760]: I0123 19:13:00.830020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0db24b5a-b078-42ca-b3ef-4abf3cf33531/memcached/0.log" Jan 23 19:13:12 crc kubenswrapper[4760]: I0123 19:13:12.812757 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:13:12 crc kubenswrapper[4760]: I0123 19:13:12.863482 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5g86q_3648750a-24fe-4391-8921-66d791485e98/manager/0.log" Jan 23 19:13:12 crc kubenswrapper[4760]: I0123 19:13:12.999062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.036959 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.043822 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.214277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.231941 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/extract/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.251084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.519798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-cql54_fdb3af86-9ecd-45de-8f76-976ff884b581/manager/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.546123 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-j9vqr_d0989ccd-5163-46a0-b578-975ba1c31f03/manager/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.703025 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-pjz6t_ef671ec0-a50e-4acd-bd63-31aa36cf3033/manager/0.log" Jan 23 19:13:13 crc 
kubenswrapper[4760]: I0123 19:13:13.721615 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-ljpl4_0d449643-d693-4591-a0d6-42e8129a3468/manager/0.log" Jan 23 19:13:13 crc kubenswrapper[4760]: I0123 19:13:13.917909 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8w8lt_fcc9617c-e7aa-4707-bcaf-1492e3e0fee6/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.188222 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-vvcd8_58df2b6d-bc85-4266-bc2c-143cd52efc28/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.223851 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-cxqww_f56403a2-dc6e-4362-99c2-669531fd3d8d/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.262538 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-qf78h_4d84645c-b378-4acd-a3e5-638c61a3b709/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.442328 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7758cc4469-bczdt_8a1115aa-5fc1-4dc1-8752-7d15f984837b/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.470838 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf_285e41c1-c4f8-4978-9a78-ca8d88b45f29/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.679115 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-jpt62_b96abc36-760b-4dfb-bc01-80872c59c059/manager/0.log" Jan 23 
19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.741940 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-jq9l4_587a0d90-d644-4501-bc83-ef454dc4b3d9/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.852603 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-ngqsw_89d52854-e7b7-4eba-b990-49a971674ab5/manager/0.log" Jan 23 19:13:14 crc kubenswrapper[4760]: I0123 19:13:14.930101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm_ed6619a3-ea05-44ae-880e-c9ba87fb93f9/manager/0.log" Jan 23 19:13:15 crc kubenswrapper[4760]: I0123 19:13:15.261349 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ff567b4f8-wdx4z_e9a6d033-9989-4ea2-a4c1-734f2baa1828/operator/0.log" Jan 23 19:13:15 crc kubenswrapper[4760]: I0123 19:13:15.379008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4cxm9_572066a4-717d-4bc0-8ef4-146bd33c3768/registry-server/0.log" Jan 23 19:13:15 crc kubenswrapper[4760]: I0123 19:13:15.640238 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-tbdng_bb6317fe-84f6-4921-9286-6b1aadd6d038/manager/0.log" Jan 23 19:13:15 crc kubenswrapper[4760]: I0123 19:13:15.844456 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-lzv66_0dab320d-061f-43f2-9e57-1c94b958522a/manager/0.log" Jan 23 19:13:15 crc kubenswrapper[4760]: I0123 19:13:15.958472 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-j24jt_ec704d93-0ca4-4d63-a123-dbb5a62bffed/operator/0.log" 
Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.212556 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-kmclp_88c1fb15-33fa-40cf-afa9-068d281bbed5/manager/0.log" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.489376 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-7bc5b_9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0/manager/0.log" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.616492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-nxws7_78a244f9-feb4-4df5-b5ec-7bb09185e655/manager/0.log" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.666363 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:16 crc kubenswrapper[4760]: E0123 19:13:16.667228 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="extract-content" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.667250 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="extract-content" Jan 23 19:13:16 crc kubenswrapper[4760]: E0123 19:13:16.667262 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="extract-utilities" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.667269 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="extract-utilities" Jan 23 19:13:16 crc kubenswrapper[4760]: E0123 19:13:16.667310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="registry-server" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.667319 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="registry-server" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.667507 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cb487e-b946-4609-b574-34be41a1f5e9" containerName="registry-server" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.673145 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.676813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.731539 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7555664f8b-7kpfz_f5c3fafa-733d-4719-89f5-afd3c885919e/manager/0.log" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.757485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-r2pws_807135e7-1ace-4928-be9b-82b8a58464fe/manager/0.log" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.782109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.782224 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rwjh\" (UniqueName: \"kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " 
pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.782301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.883615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.883718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rwjh\" (UniqueName: \"kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.883774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.884267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " 
pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.884328 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.903136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rwjh\" (UniqueName: \"kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh\") pod \"community-operators-h5pl8\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:16 crc kubenswrapper[4760]: I0123 19:13:16.997237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:17 crc kubenswrapper[4760]: I0123 19:13:17.556783 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:17 crc kubenswrapper[4760]: W0123 19:13:17.566615 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206d58d6_18e5_4e8a_b485_01858f605b88.slice/crio-47034a985480bcabe8c68b561329f274401fc9b18019366d9832d63e78347855 WatchSource:0}: Error finding container 47034a985480bcabe8c68b561329f274401fc9b18019366d9832d63e78347855: Status 404 returned error can't find the container with id 47034a985480bcabe8c68b561329f274401fc9b18019366d9832d63e78347855 Jan 23 19:13:17 crc kubenswrapper[4760]: I0123 19:13:17.790148 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" 
event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerStarted","Data":"47034a985480bcabe8c68b561329f274401fc9b18019366d9832d63e78347855"} Jan 23 19:13:18 crc kubenswrapper[4760]: I0123 19:13:18.800093 4760 generic.go:334] "Generic (PLEG): container finished" podID="206d58d6-18e5-4e8a-b485-01858f605b88" containerID="544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028" exitCode=0 Jan 23 19:13:18 crc kubenswrapper[4760]: I0123 19:13:18.800181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerDied","Data":"544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028"} Jan 23 19:13:20 crc kubenswrapper[4760]: I0123 19:13:20.818751 4760 generic.go:334] "Generic (PLEG): container finished" podID="206d58d6-18e5-4e8a-b485-01858f605b88" containerID="a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c" exitCode=0 Jan 23 19:13:20 crc kubenswrapper[4760]: I0123 19:13:20.818851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerDied","Data":"a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c"} Jan 23 19:13:21 crc kubenswrapper[4760]: I0123 19:13:21.842473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerStarted","Data":"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952"} Jan 23 19:13:21 crc kubenswrapper[4760]: I0123 19:13:21.864692 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5pl8" podStartSLOduration=3.430714646 podStartE2EDuration="5.864675482s" podCreationTimestamp="2026-01-23 19:13:16 +0000 UTC" firstStartedPulling="2026-01-23 19:13:18.802442387 +0000 
UTC m=+4341.804900320" lastFinishedPulling="2026-01-23 19:13:21.236403223 +0000 UTC m=+4344.238861156" observedRunningTime="2026-01-23 19:13:21.863343746 +0000 UTC m=+4344.865801679" watchObservedRunningTime="2026-01-23 19:13:21.864675482 +0000 UTC m=+4344.867133415" Jan 23 19:13:26 crc kubenswrapper[4760]: I0123 19:13:26.998022 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:26 crc kubenswrapper[4760]: I0123 19:13:26.999443 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:27 crc kubenswrapper[4760]: I0123 19:13:27.055686 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:27 crc kubenswrapper[4760]: I0123 19:13:27.941761 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:27 crc kubenswrapper[4760]: I0123 19:13:27.995880 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:29 crc kubenswrapper[4760]: I0123 19:13:29.932556 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5pl8" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="registry-server" containerID="cri-o://26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952" gracePeriod=2 Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.774627 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.865015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content\") pod \"206d58d6-18e5-4e8a-b485-01858f605b88\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.865297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities\") pod \"206d58d6-18e5-4e8a-b485-01858f605b88\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.865445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rwjh\" (UniqueName: \"kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh\") pod \"206d58d6-18e5-4e8a-b485-01858f605b88\" (UID: \"206d58d6-18e5-4e8a-b485-01858f605b88\") " Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.866395 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities" (OuterVolumeSpecName: "utilities") pod "206d58d6-18e5-4e8a-b485-01858f605b88" (UID: "206d58d6-18e5-4e8a-b485-01858f605b88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.875934 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh" (OuterVolumeSpecName: "kube-api-access-4rwjh") pod "206d58d6-18e5-4e8a-b485-01858f605b88" (UID: "206d58d6-18e5-4e8a-b485-01858f605b88"). InnerVolumeSpecName "kube-api-access-4rwjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.929286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "206d58d6-18e5-4e8a-b485-01858f605b88" (UID: "206d58d6-18e5-4e8a-b485-01858f605b88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.944424 4760 generic.go:334] "Generic (PLEG): container finished" podID="206d58d6-18e5-4e8a-b485-01858f605b88" containerID="26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952" exitCode=0 Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.944673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerDied","Data":"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952"} Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.945613 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5pl8" event={"ID":"206d58d6-18e5-4e8a-b485-01858f605b88","Type":"ContainerDied","Data":"47034a985480bcabe8c68b561329f274401fc9b18019366d9832d63e78347855"} Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.945714 4760 scope.go:117] "RemoveContainer" containerID="26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.944772 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5pl8" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.964182 4760 scope.go:117] "RemoveContainer" containerID="a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.968591 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rwjh\" (UniqueName: \"kubernetes.io/projected/206d58d6-18e5-4e8a-b485-01858f605b88-kube-api-access-4rwjh\") on node \"crc\" DevicePath \"\"" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.968626 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.968671 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d58d6-18e5-4e8a-b485-01858f605b88-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.986374 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:30 crc kubenswrapper[4760]: I0123 19:13:30.997926 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5pl8"] Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.000395 4760 scope.go:117] "RemoveContainer" containerID="544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.033960 4760 scope.go:117] "RemoveContainer" containerID="26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952" Jan 23 19:13:31 crc kubenswrapper[4760]: E0123 19:13:31.034628 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952\": container with ID starting with 26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952 not found: ID does not exist" containerID="26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.034661 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952"} err="failed to get container status \"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952\": rpc error: code = NotFound desc = could not find container \"26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952\": container with ID starting with 26a81d155e31ae95f01523f84447efc64f6fce58f36d2774506ffbb30c193952 not found: ID does not exist" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.034689 4760 scope.go:117] "RemoveContainer" containerID="a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c" Jan 23 19:13:31 crc kubenswrapper[4760]: E0123 19:13:31.034947 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c\": container with ID starting with a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c not found: ID does not exist" containerID="a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.034969 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c"} err="failed to get container status \"a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c\": rpc error: code = NotFound desc = could not find container \"a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c\": container with ID 
starting with a20147b80e3ad2648140568b3606ae8057f569d159c4a86b58ce4cc2a210505c not found: ID does not exist" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.034985 4760 scope.go:117] "RemoveContainer" containerID="544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028" Jan 23 19:13:31 crc kubenswrapper[4760]: E0123 19:13:31.035310 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028\": container with ID starting with 544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028 not found: ID does not exist" containerID="544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.035330 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028"} err="failed to get container status \"544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028\": rpc error: code = NotFound desc = could not find container \"544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028\": container with ID starting with 544b2d63bb40c288012c3351658911f1a2384a41cd24b2083d1119cd1a995028 not found: ID does not exist" Jan 23 19:13:31 crc kubenswrapper[4760]: I0123 19:13:31.605625 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" path="/var/lib/kubelet/pods/206d58d6-18e5-4e8a-b485-01858f605b88/volumes" Jan 23 19:13:37 crc kubenswrapper[4760]: I0123 19:13:37.316011 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ffvsn_82314a42-a08b-4561-b24a-71e715d5d37f/control-plane-machine-set-operator/0.log" Jan 23 19:13:37 crc kubenswrapper[4760]: I0123 19:13:37.487971 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qkh24_c08fd2c2-1700-4296-a369-62c3c9928a63/kube-rbac-proxy/0.log" Jan 23 19:13:37 crc kubenswrapper[4760]: I0123 19:13:37.564831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qkh24_c08fd2c2-1700-4296-a369-62c3c9928a63/machine-api-operator/0.log" Jan 23 19:13:46 crc kubenswrapper[4760]: I0123 19:13:46.075631 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:13:46 crc kubenswrapper[4760]: I0123 19:13:46.076549 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:13:52 crc kubenswrapper[4760]: I0123 19:13:52.342665 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pnwww_9ef58368-3cff-49ff-8dfd-17ae3ff9e710/cert-manager-controller/0.log" Jan 23 19:13:53 crc kubenswrapper[4760]: I0123 19:13:53.048969 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-t58c6_0a7dda60-1788-4458-b1c4-fa4ecfd723a2/cert-manager-cainjector/0.log" Jan 23 19:13:53 crc kubenswrapper[4760]: I0123 19:13:53.207242 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-f4cdr_8eed5962-6318-49b8-82a5-7f10b629d81c/cert-manager-webhook/0.log" Jan 23 19:14:06 crc kubenswrapper[4760]: I0123 19:14:06.667648 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qlk28_45b6025c-fe75-4723-8f72-7ef9d4414827/nmstate-console-plugin/0.log" Jan 23 19:14:06 crc kubenswrapper[4760]: I0123 19:14:06.915690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j7k9r_0627471a-680e-425a-a2de-e4e8d1b4e956/nmstate-handler/0.log" Jan 23 19:14:07 crc kubenswrapper[4760]: I0123 19:14:07.008538 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj872_02fddd4e-8b61-4c07-b08d-f8ab8a2799ba/kube-rbac-proxy/0.log" Jan 23 19:14:07 crc kubenswrapper[4760]: I0123 19:14:07.064987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj872_02fddd4e-8b61-4c07-b08d-f8ab8a2799ba/nmstate-metrics/0.log" Jan 23 19:14:07 crc kubenswrapper[4760]: I0123 19:14:07.102346 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cnl5v_7cffdf90-9546-4761-bb9d-c4c6da9dffa7/nmstate-operator/0.log" Jan 23 19:14:07 crc kubenswrapper[4760]: I0123 19:14:07.275520 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rbsvp_5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea/nmstate-webhook/0.log" Jan 23 19:14:16 crc kubenswrapper[4760]: I0123 19:14:16.075373 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:14:16 crc kubenswrapper[4760]: I0123 19:14:16.075951 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.017429 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5pm7w_d77acbc5-a14f-4002-ac5d-f6c90f44faf6/kube-rbac-proxy/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.233676 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5pm7w_d77acbc5-a14f-4002-ac5d-f6c90f44faf6/controller/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.287133 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.430551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.461516 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.482930 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.483718 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.689521 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.727482 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.728073 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.745260 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.918751 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.924078 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.925551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:14:37 crc kubenswrapper[4760]: I0123 19:14:37.968350 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/controller/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.110449 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/frr-metrics/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.272700 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/kube-rbac-proxy/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.331794 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/kube-rbac-proxy-frr/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.345597 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/reloader/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.575023 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-lzmgt_bdfea4ed-b515-4324-a846-11743d9ae4ab/frr-k8s-webhook-server/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.825050 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b8f8768df-khhb6_dc02afcf-f520-4bdf-a8ae-52c2a9c1857e/webhook-server/0.log" Jan 23 19:14:38 crc kubenswrapper[4760]: I0123 19:14:38.896289 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cd7b57f9b-wsfcj_21e5a15c-db54-43f3-8dd9-834d4a327edd/manager/0.log" Jan 23 19:14:39 crc kubenswrapper[4760]: I0123 19:14:39.202610 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vcqvw_c57f00f5-beec-44ff-b9cc-83ed33ddc502/kube-rbac-proxy/0.log" Jan 23 19:14:39 crc kubenswrapper[4760]: I0123 19:14:39.695589 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vcqvw_c57f00f5-beec-44ff-b9cc-83ed33ddc502/speaker/0.log" Jan 23 19:14:39 crc kubenswrapper[4760]: I0123 19:14:39.771909 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/frr/0.log" Jan 23 19:14:46 crc kubenswrapper[4760]: I0123 19:14:46.075171 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:14:46 crc kubenswrapper[4760]: I0123 19:14:46.075663 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:14:46 crc kubenswrapper[4760]: I0123 19:14:46.075706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 19:14:46 crc kubenswrapper[4760]: I0123 19:14:46.076389 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:14:46 crc kubenswrapper[4760]: I0123 19:14:46.076446 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" gracePeriod=600 Jan 23 19:14:48 crc kubenswrapper[4760]: I0123 19:14:48.599872 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" exitCode=0 Jan 23 19:14:48 crc kubenswrapper[4760]: I0123 19:14:48.599948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d"} Jan 23 19:14:48 crc kubenswrapper[4760]: I0123 19:14:48.600489 4760 scope.go:117] "RemoveContainer" containerID="ac5442c294bd54a0f78f09184ffcc4d860f0094ba9d19bb5a0133de731eb1c04" Jan 23 19:14:49 crc kubenswrapper[4760]: E0123 19:14:49.554288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:14:49 crc kubenswrapper[4760]: I0123 19:14:49.611738 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:14:49 crc kubenswrapper[4760]: E0123 19:14:49.612105 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:14:53 crc kubenswrapper[4760]: I0123 19:14:53.667904 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:14:53 crc kubenswrapper[4760]: I0123 19:14:53.927811 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:14:53 crc kubenswrapper[4760]: I0123 19:14:53.941727 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:14:53 crc kubenswrapper[4760]: I0123 19:14:53.954920 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.137387 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.167306 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/extract/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.247695 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.335040 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.559122 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 
19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.621109 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.646395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.776472 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/extract/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.784080 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.804231 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 19:14:54 crc kubenswrapper[4760]: I0123 19:14:54.963418 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.115143 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.119098 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.171966 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.339135 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.388054 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.590037 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.942725 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.964985 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:14:55 crc kubenswrapper[4760]: I0123 19:14:55.985771 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/registry-server/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.019572 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.212235 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.254968 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.432207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2llpp_7dc5cdf6-ad52-4c3b-a100-08709b3e06c6/marketplace-operator/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.603485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:14:56 crc kubenswrapper[4760]: I0123 19:14:56.933263 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/registry-server/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.042934 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.059648 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.105043 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.299704 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.378151 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.462621 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/registry-server/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.522990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.718121 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.764506 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.779655 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:14:57 crc kubenswrapper[4760]: I0123 19:14:57.956381 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" 
Jan 23 19:14:58 crc kubenswrapper[4760]: I0123 19:14:58.009436 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:14:58 crc kubenswrapper[4760]: I0123 19:14:58.561510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/registry-server/0.log" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.174868 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp"] Jan 23 19:15:00 crc kubenswrapper[4760]: E0123 19:15:00.175674 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="extract-content" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.175694 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="extract-content" Jan 23 19:15:00 crc kubenswrapper[4760]: E0123 19:15:00.175733 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="extract-utilities" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.175743 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="extract-utilities" Jan 23 19:15:00 crc kubenswrapper[4760]: E0123 19:15:00.175761 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.175768 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.175950 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="206d58d6-18e5-4e8a-b485-01858f605b88" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.176772 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.178651 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.179985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.184871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp"] Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.266600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.266958 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbl2d\" (UniqueName: \"kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.267056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.368630 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbl2d\" (UniqueName: \"kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.368923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.369063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.369985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.487506 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.487626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbl2d\" (UniqueName: \"kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d\") pod \"collect-profiles-29486595-wxxsp\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.501296 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.599738 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:15:00 crc kubenswrapper[4760]: E0123 19:15:00.600376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:15:00 crc kubenswrapper[4760]: I0123 19:15:00.951723 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp"] Jan 23 19:15:01 crc kubenswrapper[4760]: I0123 19:15:01.709280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" event={"ID":"8f6658ed-02fe-41f7-ad0f-2e9903e703dd","Type":"ContainerStarted","Data":"ab8c3ad6ffc4e8a8bb420af6aa0a846b738e277252e64ef8dd4e83acd5cb2d45"} Jan 23 19:15:01 crc kubenswrapper[4760]: I0123 19:15:01.709687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" event={"ID":"8f6658ed-02fe-41f7-ad0f-2e9903e703dd","Type":"ContainerStarted","Data":"26055f9ce49cef6123487303b50ca9b971bf0e85bfcdf1f71574c2c1f401ec21"} Jan 23 19:15:01 crc kubenswrapper[4760]: I0123 19:15:01.728014 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" podStartSLOduration=1.727990465 podStartE2EDuration="1.727990465s" podCreationTimestamp="2026-01-23 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:15:01.724084268 +0000 UTC m=+4444.726542211" watchObservedRunningTime="2026-01-23 19:15:01.727990465 +0000 UTC m=+4444.730448398" Jan 23 19:15:02 crc kubenswrapper[4760]: I0123 19:15:02.734480 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f6658ed-02fe-41f7-ad0f-2e9903e703dd" containerID="ab8c3ad6ffc4e8a8bb420af6aa0a846b738e277252e64ef8dd4e83acd5cb2d45" exitCode=0 Jan 23 19:15:02 crc kubenswrapper[4760]: I0123 19:15:02.734536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" event={"ID":"8f6658ed-02fe-41f7-ad0f-2e9903e703dd","Type":"ContainerDied","Data":"ab8c3ad6ffc4e8a8bb420af6aa0a846b738e277252e64ef8dd4e83acd5cb2d45"} Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.085378 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.143733 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume\") pod \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.144440 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f6658ed-02fe-41f7-ad0f-2e9903e703dd" (UID: "8f6658ed-02fe-41f7-ad0f-2e9903e703dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.245155 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume\") pod \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.245305 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbl2d\" (UniqueName: \"kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d\") pod \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\" (UID: \"8f6658ed-02fe-41f7-ad0f-2e9903e703dd\") " Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.245708 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.251790 4760 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f6658ed-02fe-41f7-ad0f-2e9903e703dd" (UID: "8f6658ed-02fe-41f7-ad0f-2e9903e703dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.255370 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d" (OuterVolumeSpecName: "kube-api-access-mbl2d") pod "8f6658ed-02fe-41f7-ad0f-2e9903e703dd" (UID: "8f6658ed-02fe-41f7-ad0f-2e9903e703dd"). InnerVolumeSpecName "kube-api-access-mbl2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.347135 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.347185 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbl2d\" (UniqueName: \"kubernetes.io/projected/8f6658ed-02fe-41f7-ad0f-2e9903e703dd-kube-api-access-mbl2d\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.751920 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" event={"ID":"8f6658ed-02fe-41f7-ad0f-2e9903e703dd","Type":"ContainerDied","Data":"26055f9ce49cef6123487303b50ca9b971bf0e85bfcdf1f71574c2c1f401ec21"} Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.751961 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26055f9ce49cef6123487303b50ca9b971bf0e85bfcdf1f71574c2c1f401ec21" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.751999 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-wxxsp" Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.808985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs"] Jan 23 19:15:04 crc kubenswrapper[4760]: I0123 19:15:04.817060 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-dztzs"] Jan 23 19:15:05 crc kubenswrapper[4760]: I0123 19:15:05.611722 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4387fcd5-317a-4080-ab95-ef7a8c15fe51" path="/var/lib/kubelet/pods/4387fcd5-317a-4080-ab95-ef7a8c15fe51/volumes" Jan 23 19:15:08 crc kubenswrapper[4760]: I0123 19:15:08.867999 4760 scope.go:117] "RemoveContainer" containerID="8a45a87c0ebf2b8362871de646d086107968ef47626638d2ad8591d834e766ca" Jan 23 19:15:14 crc kubenswrapper[4760]: I0123 19:15:14.595198 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:15:14 crc kubenswrapper[4760]: E0123 19:15:14.597051 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:15:29 crc kubenswrapper[4760]: I0123 19:15:29.594976 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:15:29 crc kubenswrapper[4760]: E0123 19:15:29.595740 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:15:42 crc kubenswrapper[4760]: I0123 19:15:42.595321 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:15:42 crc kubenswrapper[4760]: E0123 19:15:42.596227 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:15:56 crc kubenswrapper[4760]: I0123 19:15:56.596997 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:15:56 crc kubenswrapper[4760]: E0123 19:15:56.597697 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:10 crc kubenswrapper[4760]: I0123 19:16:10.595552 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:16:10 crc kubenswrapper[4760]: E0123 19:16:10.596208 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:22 crc kubenswrapper[4760]: I0123 19:16:22.595228 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:16:22 crc kubenswrapper[4760]: E0123 19:16:22.596025 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:35 crc kubenswrapper[4760]: I0123 19:16:35.597575 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:16:35 crc kubenswrapper[4760]: E0123 19:16:35.598332 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:47 crc kubenswrapper[4760]: I0123 19:16:47.620610 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:16:47 crc kubenswrapper[4760]: E0123 19:16:47.621902 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.782371 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:16:58 crc kubenswrapper[4760]: E0123 19:16:58.783492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6658ed-02fe-41f7-ad0f-2e9903e703dd" containerName="collect-profiles" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.783512 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6658ed-02fe-41f7-ad0f-2e9903e703dd" containerName="collect-profiles" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.783775 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6658ed-02fe-41f7-ad0f-2e9903e703dd" containerName="collect-profiles" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.785186 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.798461 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.969363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.969560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:58 crc kubenswrapper[4760]: I0123 19:16:58.969619 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twv8t\" (UniqueName: \"kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.071381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.071507 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-twv8t\" (UniqueName: \"kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.071639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.071896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.071959 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.107058 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twv8t\" (UniqueName: \"kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t\") pod \"redhat-operators-z88p4\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.119236 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.595170 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:16:59 crc kubenswrapper[4760]: E0123 19:16:59.595977 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.613263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.832798 4760 generic.go:334] "Generic (PLEG): container finished" podID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerID="bbf4ae6ed6af773b60503ca375674178ee2553473f08f3295a1f12a9d09362c3" exitCode=0 Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.832903 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hq4g9/must-gather-f7spt" event={"ID":"72aaafbc-91a2-433a-b56c-15c2e96731ee","Type":"ContainerDied","Data":"bbf4ae6ed6af773b60503ca375674178ee2553473f08f3295a1f12a9d09362c3"} Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.833782 4760 scope.go:117] "RemoveContainer" containerID="bbf4ae6ed6af773b60503ca375674178ee2553473f08f3295a1f12a9d09362c3" Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.836855 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerID="7b273f508f5ad10d146216c4405750f9a4253ee447922360724f6b23d1790a3b" exitCode=0 Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.836896 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerDied","Data":"7b273f508f5ad10d146216c4405750f9a4253ee447922360724f6b23d1790a3b"} Jan 23 19:16:59 crc kubenswrapper[4760]: I0123 19:16:59.836924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerStarted","Data":"96256a9153e4e58f30a5be02b7fa764ad7de484584ca036fcc057c6b8e843809"} Jan 23 19:17:00 crc kubenswrapper[4760]: I0123 19:17:00.621775 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hq4g9_must-gather-f7spt_72aaafbc-91a2-433a-b56c-15c2e96731ee/gather/0.log" Jan 23 19:17:00 crc kubenswrapper[4760]: I0123 19:17:00.847342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerStarted","Data":"8a20b082ce02df07432e3df23136a836ec387b23e7ce91d1b86f64068f806b83"} Jan 23 19:17:01 crc kubenswrapper[4760]: I0123 19:17:01.858131 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerID="8a20b082ce02df07432e3df23136a836ec387b23e7ce91d1b86f64068f806b83" exitCode=0 Jan 23 19:17:01 crc kubenswrapper[4760]: I0123 19:17:01.858363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerDied","Data":"8a20b082ce02df07432e3df23136a836ec387b23e7ce91d1b86f64068f806b83"} Jan 23 19:17:02 crc kubenswrapper[4760]: I0123 19:17:02.872695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" 
event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerStarted","Data":"9967801f208e78d4fc114bdebe7dbb643f23fb62594e8654fd1bd0a3e3e0f2ed"} Jan 23 19:17:02 crc kubenswrapper[4760]: I0123 19:17:02.894982 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z88p4" podStartSLOduration=2.259706708 podStartE2EDuration="4.894960576s" podCreationTimestamp="2026-01-23 19:16:58 +0000 UTC" firstStartedPulling="2026-01-23 19:16:59.838643054 +0000 UTC m=+4562.841100987" lastFinishedPulling="2026-01-23 19:17:02.473896912 +0000 UTC m=+4565.476354855" observedRunningTime="2026-01-23 19:17:02.88743385 +0000 UTC m=+4565.889891793" watchObservedRunningTime="2026-01-23 19:17:02.894960576 +0000 UTC m=+4565.897418509" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.119452 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.120767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.168821 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.237710 4760 scope.go:117] "RemoveContainer" containerID="f5094091fbf40aa0d6ca656324cc4a41262bbed6da90859d8b34d38824446ef0" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.617288 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hq4g9/must-gather-f7spt"] Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.617606 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hq4g9/must-gather-f7spt" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="copy" 
containerID="cri-o://3060f6af6ef1ba16199a60ce6c86536b8f63532d9a7e1f0f13405c1536828b54" gracePeriod=2 Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.628855 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hq4g9/must-gather-f7spt"] Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.944897 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hq4g9_must-gather-f7spt_72aaafbc-91a2-433a-b56c-15c2e96731ee/copy/0.log" Jan 23 19:17:09 crc kubenswrapper[4760]: I0123 19:17:09.946163 4760 generic.go:334] "Generic (PLEG): container finished" podID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerID="3060f6af6ef1ba16199a60ce6c86536b8f63532d9a7e1f0f13405c1536828b54" exitCode=143 Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.032693 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.149674 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hq4g9_must-gather-f7spt_72aaafbc-91a2-433a-b56c-15c2e96731ee/copy/0.log" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.150209 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.315125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output\") pod \"72aaafbc-91a2-433a-b56c-15c2e96731ee\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.315232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vww\" (UniqueName: \"kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww\") pod \"72aaafbc-91a2-433a-b56c-15c2e96731ee\" (UID: \"72aaafbc-91a2-433a-b56c-15c2e96731ee\") " Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.324306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww" (OuterVolumeSpecName: "kube-api-access-j9vww") pod "72aaafbc-91a2-433a-b56c-15c2e96731ee" (UID: "72aaafbc-91a2-433a-b56c-15c2e96731ee"). InnerVolumeSpecName "kube-api-access-j9vww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.422076 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vww\" (UniqueName: \"kubernetes.io/projected/72aaafbc-91a2-433a-b56c-15c2e96731ee-kube-api-access-j9vww\") on node \"crc\" DevicePath \"\"" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.512532 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "72aaafbc-91a2-433a-b56c-15c2e96731ee" (UID: "72aaafbc-91a2-433a-b56c-15c2e96731ee"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.523547 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72aaafbc-91a2-433a-b56c-15c2e96731ee-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.955146 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hq4g9_must-gather-f7spt_72aaafbc-91a2-433a-b56c-15c2e96731ee/copy/0.log" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.955631 4760 scope.go:117] "RemoveContainer" containerID="3060f6af6ef1ba16199a60ce6c86536b8f63532d9a7e1f0f13405c1536828b54" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.955666 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hq4g9/must-gather-f7spt" Jan 23 19:17:10 crc kubenswrapper[4760]: I0123 19:17:10.976687 4760 scope.go:117] "RemoveContainer" containerID="bbf4ae6ed6af773b60503ca375674178ee2553473f08f3295a1f12a9d09362c3" Jan 23 19:17:11 crc kubenswrapper[4760]: I0123 19:17:11.589151 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:17:11 crc kubenswrapper[4760]: I0123 19:17:11.606157 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" path="/var/lib/kubelet/pods/72aaafbc-91a2-433a-b56c-15c2e96731ee/volumes" Jan 23 19:17:11 crc kubenswrapper[4760]: I0123 19:17:11.966817 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z88p4" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="registry-server" containerID="cri-o://9967801f208e78d4fc114bdebe7dbb643f23fb62594e8654fd1bd0a3e3e0f2ed" gracePeriod=2 Jan 23 19:17:13 crc kubenswrapper[4760]: I0123 19:17:13.595211 4760 scope.go:117] "RemoveContainer" 
containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:17:13 crc kubenswrapper[4760]: E0123 19:17:13.595581 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:17:13 crc kubenswrapper[4760]: I0123 19:17:13.985738 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerID="9967801f208e78d4fc114bdebe7dbb643f23fb62594e8654fd1bd0a3e3e0f2ed" exitCode=0 Jan 23 19:17:13 crc kubenswrapper[4760]: I0123 19:17:13.985789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerDied","Data":"9967801f208e78d4fc114bdebe7dbb643f23fb62594e8654fd1bd0a3e3e0f2ed"} Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.396175 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.502587 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content\") pod \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.502727 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities\") pod \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.503111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twv8t\" (UniqueName: \"kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t\") pod \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\" (UID: \"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8\") " Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.506705 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities" (OuterVolumeSpecName: "utilities") pod "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" (UID: "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.512861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t" (OuterVolumeSpecName: "kube-api-access-twv8t") pod "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" (UID: "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8"). InnerVolumeSpecName "kube-api-access-twv8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.608984 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.609331 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twv8t\" (UniqueName: \"kubernetes.io/projected/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-kube-api-access-twv8t\") on node \"crc\" DevicePath \"\"" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.646577 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" (UID: "f6ca3313-4d17-4b12-8d84-a88be6e9e3c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:17:14 crc kubenswrapper[4760]: I0123 19:17:14.712264 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.019395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z88p4" event={"ID":"f6ca3313-4d17-4b12-8d84-a88be6e9e3c8","Type":"ContainerDied","Data":"96256a9153e4e58f30a5be02b7fa764ad7de484584ca036fcc057c6b8e843809"} Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.019461 4760 scope.go:117] "RemoveContainer" containerID="9967801f208e78d4fc114bdebe7dbb643f23fb62594e8654fd1bd0a3e3e0f2ed" Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.019538 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z88p4" Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.038616 4760 scope.go:117] "RemoveContainer" containerID="8a20b082ce02df07432e3df23136a836ec387b23e7ce91d1b86f64068f806b83" Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.053900 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.065293 4760 scope.go:117] "RemoveContainer" containerID="7b273f508f5ad10d146216c4405750f9a4253ee447922360724f6b23d1790a3b" Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.070185 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z88p4"] Jan 23 19:17:15 crc kubenswrapper[4760]: I0123 19:17:15.605582 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" path="/var/lib/kubelet/pods/f6ca3313-4d17-4b12-8d84-a88be6e9e3c8/volumes" Jan 23 19:17:27 crc kubenswrapper[4760]: I0123 19:17:27.601465 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:17:27 crc kubenswrapper[4760]: E0123 19:17:27.602200 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:17:42 crc kubenswrapper[4760]: I0123 19:17:42.596112 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:17:42 crc kubenswrapper[4760]: E0123 19:17:42.597151 4760 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:17:55 crc kubenswrapper[4760]: I0123 19:17:55.595917 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:17:55 crc kubenswrapper[4760]: E0123 19:17:55.598717 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:18:07 crc kubenswrapper[4760]: I0123 19:18:07.601755 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:18:07 crc kubenswrapper[4760]: E0123 19:18:07.602575 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:18:09 crc kubenswrapper[4760]: I0123 19:18:09.291619 4760 scope.go:117] "RemoveContainer" containerID="19f20280b04c1431de8e943ad7868f1b09da22738fcbc7c1cd13b8f9d11e1449" Jan 23 19:18:21 crc kubenswrapper[4760]: I0123 19:18:21.595818 4760 scope.go:117] "RemoveContainer" 
containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:18:21 crc kubenswrapper[4760]: E0123 19:18:21.596630 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:18:34 crc kubenswrapper[4760]: I0123 19:18:34.595265 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:18:34 crc kubenswrapper[4760]: E0123 19:18:34.596001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:18:46 crc kubenswrapper[4760]: I0123 19:18:46.595963 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:18:46 crc kubenswrapper[4760]: E0123 19:18:46.596764 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:18:59 crc kubenswrapper[4760]: I0123 19:18:59.595932 4760 scope.go:117] 
"RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:18:59 crc kubenswrapper[4760]: E0123 19:18:59.596794 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:19:12 crc kubenswrapper[4760]: I0123 19:19:12.595184 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:19:12 crc kubenswrapper[4760]: E0123 19:19:12.595984 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:19:27 crc kubenswrapper[4760]: I0123 19:19:27.602837 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:19:27 crc kubenswrapper[4760]: E0123 19:19:27.603856 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:19:39 crc kubenswrapper[4760]: I0123 19:19:39.615571 
4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:19:39 crc kubenswrapper[4760]: E0123 19:19:39.618546 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.831896 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:44 crc kubenswrapper[4760]: E0123 19:19:44.833003 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="copy" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833024 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="copy" Jan 23 19:19:44 crc kubenswrapper[4760]: E0123 19:19:44.833054 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="registry-server" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833064 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="registry-server" Jan 23 19:19:44 crc kubenswrapper[4760]: E0123 19:19:44.833086 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="gather" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833094 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="gather" Jan 23 19:19:44 crc kubenswrapper[4760]: E0123 19:19:44.833133 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="extract-utilities" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833145 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="extract-utilities" Jan 23 19:19:44 crc kubenswrapper[4760]: E0123 19:19:44.833160 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="extract-content" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833175 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="extract-content" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833392 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="copy" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833434 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="72aaafbc-91a2-433a-b56c-15c2e96731ee" containerName="gather" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.833456 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ca3313-4d17-4b12-8d84-a88be6e9e3c8" containerName="registry-server" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.835209 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.846885 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.955245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.955356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:44 crc kubenswrapper[4760]: I0123 19:19:44.955519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvbp\" (UniqueName: \"kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.057711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.057792 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.057915 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czvbp\" (UniqueName: \"kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.058202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.058632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.087008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvbp\" (UniqueName: \"kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp\") pod \"redhat-marketplace-zrb6m\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.160554 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:45 crc kubenswrapper[4760]: I0123 19:19:45.648758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:45 crc kubenswrapper[4760]: W0123 19:19:45.659948 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd343a9b_98dc_4492_acaa_1c5a38295464.slice/crio-dc61094b681fda4902b8d3bd2bce35c4dcbec911f01efa8b5de79f9b17191d29 WatchSource:0}: Error finding container dc61094b681fda4902b8d3bd2bce35c4dcbec911f01efa8b5de79f9b17191d29: Status 404 returned error can't find the container with id dc61094b681fda4902b8d3bd2bce35c4dcbec911f01efa8b5de79f9b17191d29 Jan 23 19:19:46 crc kubenswrapper[4760]: I0123 19:19:46.343511 4760 generic.go:334] "Generic (PLEG): container finished" podID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerID="e0499e792a9a80b3d0f838591010c1d5968a40a81c35e16f8fa7874c6558c9bd" exitCode=0 Jan 23 19:19:46 crc kubenswrapper[4760]: I0123 19:19:46.343839 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerDied","Data":"e0499e792a9a80b3d0f838591010c1d5968a40a81c35e16f8fa7874c6558c9bd"} Jan 23 19:19:46 crc kubenswrapper[4760]: I0123 19:19:46.343867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerStarted","Data":"dc61094b681fda4902b8d3bd2bce35c4dcbec911f01efa8b5de79f9b17191d29"} Jan 23 19:19:46 crc kubenswrapper[4760]: I0123 19:19:46.350280 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:19:47 crc kubenswrapper[4760]: I0123 19:19:47.354892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zrb6m" event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerStarted","Data":"be7875946aa0087af5f4a768f5391980a94180ca6234de51540ee742afdefaca"} Jan 23 19:19:48 crc kubenswrapper[4760]: I0123 19:19:48.365003 4760 generic.go:334] "Generic (PLEG): container finished" podID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerID="be7875946aa0087af5f4a768f5391980a94180ca6234de51540ee742afdefaca" exitCode=0 Jan 23 19:19:48 crc kubenswrapper[4760]: I0123 19:19:48.365078 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerDied","Data":"be7875946aa0087af5f4a768f5391980a94180ca6234de51540ee742afdefaca"} Jan 23 19:19:49 crc kubenswrapper[4760]: I0123 19:19:49.378208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerStarted","Data":"1e6a501896d4f746d700107e62ad2f1c42f15ebe3ead39af23149cf9c86c12d4"} Jan 23 19:19:49 crc kubenswrapper[4760]: I0123 19:19:49.405714 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zrb6m" podStartSLOduration=2.983579119 podStartE2EDuration="5.405690887s" podCreationTimestamp="2026-01-23 19:19:44 +0000 UTC" firstStartedPulling="2026-01-23 19:19:46.350009003 +0000 UTC m=+4729.352466936" lastFinishedPulling="2026-01-23 19:19:48.772120771 +0000 UTC m=+4731.774578704" observedRunningTime="2026-01-23 19:19:49.397486151 +0000 UTC m=+4732.399944094" watchObservedRunningTime="2026-01-23 19:19:49.405690887 +0000 UTC m=+4732.408148820" Jan 23 19:19:52 crc kubenswrapper[4760]: I0123 19:19:52.594937 4760 scope.go:117] "RemoveContainer" containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:19:53 crc kubenswrapper[4760]: I0123 19:19:53.414590 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f"} Jan 23 19:19:55 crc kubenswrapper[4760]: I0123 19:19:55.161583 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:55 crc kubenswrapper[4760]: I0123 19:19:55.163388 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:55 crc kubenswrapper[4760]: I0123 19:19:55.218655 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:55 crc kubenswrapper[4760]: I0123 19:19:55.485912 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:55 crc kubenswrapper[4760]: I0123 19:19:55.533494 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:57 crc kubenswrapper[4760]: I0123 19:19:57.445431 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zrb6m" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="registry-server" containerID="cri-o://1e6a501896d4f746d700107e62ad2f1c42f15ebe3ead39af23149cf9c86c12d4" gracePeriod=2 Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.454812 4760 generic.go:334] "Generic (PLEG): container finished" podID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerID="1e6a501896d4f746d700107e62ad2f1c42f15ebe3ead39af23149cf9c86c12d4" exitCode=0 Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.454877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" 
event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerDied","Data":"1e6a501896d4f746d700107e62ad2f1c42f15ebe3ead39af23149cf9c86c12d4"} Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.664459 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.742189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czvbp\" (UniqueName: \"kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp\") pod \"cd343a9b-98dc-4492-acaa-1c5a38295464\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.742246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content\") pod \"cd343a9b-98dc-4492-acaa-1c5a38295464\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.742440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities\") pod \"cd343a9b-98dc-4492-acaa-1c5a38295464\" (UID: \"cd343a9b-98dc-4492-acaa-1c5a38295464\") " Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.743614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities" (OuterVolumeSpecName: "utilities") pod "cd343a9b-98dc-4492-acaa-1c5a38295464" (UID: "cd343a9b-98dc-4492-acaa-1c5a38295464"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.753624 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp" (OuterVolumeSpecName: "kube-api-access-czvbp") pod "cd343a9b-98dc-4492-acaa-1c5a38295464" (UID: "cd343a9b-98dc-4492-acaa-1c5a38295464"). InnerVolumeSpecName "kube-api-access-czvbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.771401 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd343a9b-98dc-4492-acaa-1c5a38295464" (UID: "cd343a9b-98dc-4492-acaa-1c5a38295464"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.844871 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.844914 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czvbp\" (UniqueName: \"kubernetes.io/projected/cd343a9b-98dc-4492-acaa-1c5a38295464-kube-api-access-czvbp\") on node \"crc\" DevicePath \"\"" Jan 23 19:19:58 crc kubenswrapper[4760]: I0123 19:19:58.844929 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd343a9b-98dc-4492-acaa-1c5a38295464-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.465024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrb6m" 
event={"ID":"cd343a9b-98dc-4492-acaa-1c5a38295464","Type":"ContainerDied","Data":"dc61094b681fda4902b8d3bd2bce35c4dcbec911f01efa8b5de79f9b17191d29"} Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.465299 4760 scope.go:117] "RemoveContainer" containerID="1e6a501896d4f746d700107e62ad2f1c42f15ebe3ead39af23149cf9c86c12d4" Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.465055 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrb6m" Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.490675 4760 scope.go:117] "RemoveContainer" containerID="be7875946aa0087af5f4a768f5391980a94180ca6234de51540ee742afdefaca" Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.513667 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.516614 4760 scope.go:117] "RemoveContainer" containerID="e0499e792a9a80b3d0f838591010c1d5968a40a81c35e16f8fa7874c6558c9bd" Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.526997 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrb6m"] Jan 23 19:19:59 crc kubenswrapper[4760]: I0123 19:19:59.607381 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" path="/var/lib/kubelet/pods/cd343a9b-98dc-4492-acaa-1c5a38295464/volumes" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.272905 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rgpk7/must-gather-kd2zc"] Jan 23 19:20:12 crc kubenswrapper[4760]: E0123 19:20:12.273682 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="extract-utilities" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.273694 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="extract-utilities" Jan 23 19:20:12 crc kubenswrapper[4760]: E0123 19:20:12.273720 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="registry-server" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.273727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="registry-server" Jan 23 19:20:12 crc kubenswrapper[4760]: E0123 19:20:12.273738 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="extract-content" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.273745 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="extract-content" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.273907 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd343a9b-98dc-4492-acaa-1c5a38295464" containerName="registry-server" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.274817 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.277677 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rgpk7"/"openshift-service-ca.crt" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.277768 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rgpk7"/"default-dockercfg-7gndt" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.277971 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rgpk7"/"kube-root-ca.crt" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.354894 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rgpk7/must-gather-kd2zc"] Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.402239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.402357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n8kc\" (UniqueName: \"kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.504862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n8kc\" (UniqueName: \"kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " 
pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.505074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.505595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.523543 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n8kc\" (UniqueName: \"kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc\") pod \"must-gather-kd2zc\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:12 crc kubenswrapper[4760]: I0123 19:20:12.649591 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:20:13 crc kubenswrapper[4760]: I0123 19:20:13.100482 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rgpk7/must-gather-kd2zc"] Jan 23 19:20:13 crc kubenswrapper[4760]: I0123 19:20:13.583235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" event={"ID":"891c50bf-efcf-4171-ac9d-15ed375f2f14","Type":"ContainerStarted","Data":"4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa"} Jan 23 19:20:13 crc kubenswrapper[4760]: I0123 19:20:13.583539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" event={"ID":"891c50bf-efcf-4171-ac9d-15ed375f2f14","Type":"ContainerStarted","Data":"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190"} Jan 23 19:20:13 crc kubenswrapper[4760]: I0123 19:20:13.583549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" event={"ID":"891c50bf-efcf-4171-ac9d-15ed375f2f14","Type":"ContainerStarted","Data":"e09ed1f98aaa42793b00da01c04fc83d589d1d13e94ae5836cd4f5e760d3c339"} Jan 23 19:20:13 crc kubenswrapper[4760]: I0123 19:20:13.605067 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" podStartSLOduration=1.6050361579999999 podStartE2EDuration="1.605036158s" podCreationTimestamp="2026-01-23 19:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:20:13.601205762 +0000 UTC m=+4756.603663695" watchObservedRunningTime="2026-01-23 19:20:13.605036158 +0000 UTC m=+4756.607494101" Jan 23 19:20:16 crc kubenswrapper[4760]: E0123 19:20:16.128570 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.90:51462->38.129.56.90:34677: write tcp 
38.129.56.90:51462->38.129.56.90:34677: write: broken pipe Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.303204 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-8g6j6"] Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.305122 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.422181 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp49r\" (UniqueName: \"kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.422557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.525031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp49r\" (UniqueName: \"kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.525612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc 
kubenswrapper[4760]: I0123 19:20:18.525717 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:18 crc kubenswrapper[4760]: I0123 19:20:18.976665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp49r\" (UniqueName: \"kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r\") pod \"crc-debug-8g6j6\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:19 crc kubenswrapper[4760]: I0123 19:20:19.236618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:20:19 crc kubenswrapper[4760]: I0123 19:20:19.635451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" event={"ID":"1cfc53b2-145a-44bd-8650-21de9e160a06","Type":"ContainerStarted","Data":"a5bc20a102a32478252bd7a3073372f998b4a2f9b074e07e15b554e88b73ffd9"} Jan 23 19:20:19 crc kubenswrapper[4760]: I0123 19:20:19.636096 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" event={"ID":"1cfc53b2-145a-44bd-8650-21de9e160a06","Type":"ContainerStarted","Data":"37dfd0f16b89e23816661847a48fc38997e4b18f09835c7d0cd481702bf5cefb"} Jan 23 19:20:19 crc kubenswrapper[4760]: I0123 19:20:19.652158 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" podStartSLOduration=1.652118056 podStartE2EDuration="1.652118056s" podCreationTimestamp="2026-01-23 19:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
19:20:19.650220844 +0000 UTC m=+4762.652678777" watchObservedRunningTime="2026-01-23 19:20:19.652118056 +0000 UTC m=+4762.654575999" Jan 23 19:20:58 crc kubenswrapper[4760]: I0123 19:20:58.953635 4760 generic.go:334] "Generic (PLEG): container finished" podID="1cfc53b2-145a-44bd-8650-21de9e160a06" containerID="a5bc20a102a32478252bd7a3073372f998b4a2f9b074e07e15b554e88b73ffd9" exitCode=0 Jan 23 19:20:58 crc kubenswrapper[4760]: I0123 19:20:58.953687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" event={"ID":"1cfc53b2-145a-44bd-8650-21de9e160a06","Type":"ContainerDied","Data":"a5bc20a102a32478252bd7a3073372f998b4a2f9b074e07e15b554e88b73ffd9"} Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.066827 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.106059 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-8g6j6"] Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.109046 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-8g6j6"] Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.165399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host\") pod \"1cfc53b2-145a-44bd-8650-21de9e160a06\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.165469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host" (OuterVolumeSpecName: "host") pod "1cfc53b2-145a-44bd-8650-21de9e160a06" (UID: "1cfc53b2-145a-44bd-8650-21de9e160a06"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.165481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp49r\" (UniqueName: \"kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r\") pod \"1cfc53b2-145a-44bd-8650-21de9e160a06\" (UID: \"1cfc53b2-145a-44bd-8650-21de9e160a06\") " Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.166442 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cfc53b2-145a-44bd-8650-21de9e160a06-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.173720 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r" (OuterVolumeSpecName: "kube-api-access-mp49r") pod "1cfc53b2-145a-44bd-8650-21de9e160a06" (UID: "1cfc53b2-145a-44bd-8650-21de9e160a06"). InnerVolumeSpecName "kube-api-access-mp49r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.268202 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp49r\" (UniqueName: \"kubernetes.io/projected/1cfc53b2-145a-44bd-8650-21de9e160a06-kube-api-access-mp49r\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.973942 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37dfd0f16b89e23816661847a48fc38997e4b18f09835c7d0cd481702bf5cefb" Jan 23 19:21:00 crc kubenswrapper[4760]: I0123 19:21:00.974043 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-8g6j6" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.278589 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-xnnvh"] Jan 23 19:21:01 crc kubenswrapper[4760]: E0123 19:21:01.279224 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc53b2-145a-44bd-8650-21de9e160a06" containerName="container-00" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.279236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc53b2-145a-44bd-8650-21de9e160a06" containerName="container-00" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.279456 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfc53b2-145a-44bd-8650-21de9e160a06" containerName="container-00" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.280052 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.285598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.285654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fvf\" (UniqueName: \"kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.387212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.387281 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fvf\" (UniqueName: \"kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.387368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.411458 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fvf\" (UniqueName: \"kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf\") pod \"crc-debug-xnnvh\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.598476 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.605734 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfc53b2-145a-44bd-8650-21de9e160a06" path="/var/lib/kubelet/pods/1cfc53b2-145a-44bd-8650-21de9e160a06/volumes" Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.982781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" event={"ID":"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48","Type":"ContainerStarted","Data":"91af40cc6f347e7a30710f324ffb0d95ef7022bad96d1fcc3ece4846dcb755c2"} Jan 23 19:21:01 crc kubenswrapper[4760]: I0123 19:21:01.983111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" event={"ID":"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48","Type":"ContainerStarted","Data":"ac752a428b007adec9898aef1b8fde3e07d92504b1d3740dfcafe8ade53ff081"} Jan 23 19:21:02 crc kubenswrapper[4760]: I0123 19:21:02.992494 4760 generic.go:334] "Generic (PLEG): container finished" podID="b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" containerID="91af40cc6f347e7a30710f324ffb0d95ef7022bad96d1fcc3ece4846dcb755c2" exitCode=0 Jan 23 19:21:02 crc kubenswrapper[4760]: I0123 19:21:02.992622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" event={"ID":"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48","Type":"ContainerDied","Data":"91af40cc6f347e7a30710f324ffb0d95ef7022bad96d1fcc3ece4846dcb755c2"} Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.116351 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.142294 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host\") pod \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.142356 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75fvf\" (UniqueName: \"kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf\") pod \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\" (UID: \"b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48\") " Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.142927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host" (OuterVolumeSpecName: "host") pod "b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" (UID: "b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.159742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf" (OuterVolumeSpecName: "kube-api-access-75fvf") pod "b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" (UID: "b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48"). InnerVolumeSpecName "kube-api-access-75fvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.244460 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.244495 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75fvf\" (UniqueName: \"kubernetes.io/projected/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48-kube-api-access-75fvf\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.741733 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-xnnvh"] Jan 23 19:21:04 crc kubenswrapper[4760]: I0123 19:21:04.750704 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-xnnvh"] Jan 23 19:21:05 crc kubenswrapper[4760]: I0123 19:21:05.010224 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac752a428b007adec9898aef1b8fde3e07d92504b1d3740dfcafe8ade53ff081" Jan 23 19:21:05 crc kubenswrapper[4760]: I0123 19:21:05.010271 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-xnnvh" Jan 23 19:21:05 crc kubenswrapper[4760]: I0123 19:21:05.605304 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" path="/var/lib/kubelet/pods/b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48/volumes" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.335470 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-2sphj"] Jan 23 19:21:06 crc kubenswrapper[4760]: E0123 19:21:06.335858 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" containerName="container-00" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.335882 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" containerName="container-00" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.336102 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f8bfc3-b8a6-4a91-be17-1ee7d4eabd48" containerName="container-00" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.336847 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.491836 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.491974 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhvq\" (UniqueName: \"kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.595238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.595321 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.595701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhvq\" (UniqueName: \"kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc 
kubenswrapper[4760]: I0123 19:21:06.876077 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhvq\" (UniqueName: \"kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq\") pod \"crc-debug-2sphj\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:06 crc kubenswrapper[4760]: I0123 19:21:06.964729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:07 crc kubenswrapper[4760]: I0123 19:21:07.026645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" event={"ID":"fb2937f8-750b-410e-a197-28c8b02d4dd4","Type":"ContainerStarted","Data":"18da369c2cdd346063e7af3dd1c776c06077f4e7aa1d4ef3237cf61a62e121a2"} Jan 23 19:21:08 crc kubenswrapper[4760]: I0123 19:21:08.037397 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb2937f8-750b-410e-a197-28c8b02d4dd4" containerID="150ad2a75a92d8f6f5c37f7870049f570ef71c875b0db6d39af35f2df11fd8ad" exitCode=0 Jan 23 19:21:08 crc kubenswrapper[4760]: I0123 19:21:08.037468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" event={"ID":"fb2937f8-750b-410e-a197-28c8b02d4dd4","Type":"ContainerDied","Data":"150ad2a75a92d8f6f5c37f7870049f570ef71c875b0db6d39af35f2df11fd8ad"} Jan 23 19:21:08 crc kubenswrapper[4760]: I0123 19:21:08.083640 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-2sphj"] Jan 23 19:21:08 crc kubenswrapper[4760]: I0123 19:21:08.092491 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rgpk7/crc-debug-2sphj"] Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.147204 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.242900 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host\") pod \"fb2937f8-750b-410e-a197-28c8b02d4dd4\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.242990 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvhvq\" (UniqueName: \"kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq\") pod \"fb2937f8-750b-410e-a197-28c8b02d4dd4\" (UID: \"fb2937f8-750b-410e-a197-28c8b02d4dd4\") " Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.243042 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host" (OuterVolumeSpecName: "host") pod "fb2937f8-750b-410e-a197-28c8b02d4dd4" (UID: "fb2937f8-750b-410e-a197-28c8b02d4dd4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.243497 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb2937f8-750b-410e-a197-28c8b02d4dd4-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.251287 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq" (OuterVolumeSpecName: "kube-api-access-xvhvq") pod "fb2937f8-750b-410e-a197-28c8b02d4dd4" (UID: "fb2937f8-750b-410e-a197-28c8b02d4dd4"). InnerVolumeSpecName "kube-api-access-xvhvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.345904 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvhvq\" (UniqueName: \"kubernetes.io/projected/fb2937f8-750b-410e-a197-28c8b02d4dd4-kube-api-access-xvhvq\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:09 crc kubenswrapper[4760]: I0123 19:21:09.614212 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2937f8-750b-410e-a197-28c8b02d4dd4" path="/var/lib/kubelet/pods/fb2937f8-750b-410e-a197-28c8b02d4dd4/volumes" Jan 23 19:21:10 crc kubenswrapper[4760]: I0123 19:21:10.055009 4760 scope.go:117] "RemoveContainer" containerID="150ad2a75a92d8f6f5c37f7870049f570ef71c875b0db6d39af35f2df11fd8ad" Jan 23 19:21:10 crc kubenswrapper[4760]: I0123 19:21:10.055055 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/crc-debug-2sphj" Jan 23 19:22:03 crc kubenswrapper[4760]: I0123 19:22:03.768447 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-664c7d54bb-bxtwt_78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986/barbican-api/0.log" Jan 23 19:22:03 crc kubenswrapper[4760]: I0123 19:22:03.938617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-664c7d54bb-bxtwt_78e5abdd-a7e1-49fd-b8c6-4ee3f2c0a986/barbican-api-log/0.log" Jan 23 19:22:03 crc kubenswrapper[4760]: I0123 19:22:03.958617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86b58c4bfd-xlh2d_f5e3da3f-c7fe-4735-a284-f35a50c46d2b/barbican-keystone-listener/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.214871 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-696f8c69dc-tcwp6_e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5/barbican-worker/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.254079 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-worker-696f8c69dc-tcwp6_e8cc68fb-ebfd-4d1c-80a3-7e5826d7f2f5/barbican-worker-log/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.257982 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86b58c4bfd-xlh2d_f5e3da3f-c7fe-4735-a284-f35a50c46d2b/barbican-keystone-listener-log/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.430607 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nv9w2_ec5b08e0-3bdf-46d0-9f32-ffdfdae26f61/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.469664 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/ceilometer-central-agent/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.620018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/ceilometer-notification-agent/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.640740 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/sg-core/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.657609 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f870c7cc-bcbe-4101-9a86-8a190e20cef2/proxy-httpd/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.815737 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-bqf72_9d2d6784-f9bf-48c1-95a9-0ba4167059a6/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:04 crc kubenswrapper[4760]: I0123 19:22:04.850938 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-989bg_6d4d5e59-3fcb-46e5-90d1-4bd34ee87e80/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.079960 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f611f27e-a46b-40f8-ad28-a32d1dfa1149/cinder-api-log/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.089013 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f611f27e-a46b-40f8-ad28-a32d1dfa1149/cinder-api/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.336623 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_34fe3286-04be-40d9-a398-86c54b9025f1/probe/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.451783 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3b1a53e-ed5e-43c1-aa57-e0e829359103/cinder-scheduler/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.455652 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_34fe3286-04be-40d9-a398-86c54b9025f1/cinder-backup/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.584933 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3b1a53e-ed5e-43c1-aa57-e0e829359103/probe/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.731831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_dd897463-2b70-4dcd-9a51-442771b77ff4/cinder-volume/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.761684 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_dd897463-2b70-4dcd-9a51-442771b77ff4/probe/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.869062 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wk2ds_59cea3b6-c76e-4cca-9e9f-15bdeab71c63/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:05 crc kubenswrapper[4760]: I0123 19:22:05.946912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-p8zd5_74d60425-0689-4af1-b745-22453031dcfe/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.152017 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/init/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.389730 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/init/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.428892 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-px9rg_a52c0919-287b-48b2-83f6-9bc4fb33eaa6/dnsmasq-dns/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.484689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_058e59c7-9277-4925-810f-105817254775/glance-httpd/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.551956 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_058e59c7-9277-4925-810f-105817254775/glance-log/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.643207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9/glance-httpd/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.732045 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_cba2ed50-5dc9-42b4-ab4d-e25f894a8ec9/glance-log/0.log" Jan 23 19:22:06 crc kubenswrapper[4760]: I0123 19:22:06.952019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-559467fcc6-pxz2z_fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6/horizon/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.048870 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-p8tdk_826eb339-b444-455e-b66e-f0e3fa00753d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.067741 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-559467fcc6-pxz2z_fdf60fb7-bbc2-41ed-92ca-f8970e75b4a6/horizon-log/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.205604 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-m9jvz_268fb02f-f216-4953-9868-e7b1d27448f2/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.400100 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486581-qdnbh_f163dd36-7bd7-4821-9874-8eb534d2c03d/keystone-cron/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.619218 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2ae37337-d347-4c76-ab83-43463ab30c29/kube-state-metrics/0.log" Jan 23 19:22:07 crc kubenswrapper[4760]: I0123 19:22:07.829919 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-md5ql_538bf016-5ed3-44cd-bcf0-f59c56e01048/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.289127 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-74944c68c4-mnfbr_5201f9c2-1e25-4192-8bd6-2e0fb4a5b902/keystone-api/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.302776 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d8d48def-f1d3-47de-9724-65f5d7f0d47a/probe/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.423455 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d8d48def-f1d3-47de-9724-65f5d7f0d47a/manila-scheduler/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.457267 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0c64c390-9956-4595-b1b9-9bf78be32e68/manila-api/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.722235 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_412f9ad2-6b73-4af0-bd6e-66a697eb20ba/probe/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.954037 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_412f9ad2-6b73-4af0-bd6e-66a697eb20ba/manila-share/0.log" Jan 23 19:22:08 crc kubenswrapper[4760]: I0123 19:22:08.979102 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0c64c390-9956-4595-b1b9-9bf78be32e68/manila-api-log/0.log" Jan 23 19:22:09 crc kubenswrapper[4760]: I0123 19:22:09.060857 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-585856f577-q8bpp_789429a2-8a44-4914-b54c-65e7ccaa180c/neutron-api/0.log" Jan 23 19:22:09 crc kubenswrapper[4760]: I0123 19:22:09.250736 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-585856f577-q8bpp_789429a2-8a44-4914-b54c-65e7ccaa180c/neutron-httpd/0.log" Jan 23 19:22:09 crc kubenswrapper[4760]: I0123 19:22:09.267458 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xd5wx_836b1ef8-b075-4321-9f13-18120bc8d010/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:09 crc kubenswrapper[4760]: I0123 19:22:09.755240 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ddef27ef-a5e0-4045-9024-6710d89f194a/nova-api-log/0.log" Jan 23 19:22:09 crc kubenswrapper[4760]: I0123 19:22:09.813267 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_12e7c803-ac52-4cb2-b29e-14973d73c522/nova-cell0-conductor-conductor/0.log" Jan 23 19:22:10 crc kubenswrapper[4760]: I0123 19:22:10.169041 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ddef27ef-a5e0-4045-9024-6710d89f194a/nova-api-api/0.log" Jan 23 19:22:10 crc kubenswrapper[4760]: I0123 19:22:10.515463 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_678d1978-a829-44dd-9030-3026a9f170b0/nova-cell1-conductor-conductor/0.log" Jan 23 19:22:10 crc kubenswrapper[4760]: I0123 19:22:10.515904 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6b36e62e-e56c-4619-a885-dc26d824c2ed/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 19:22:10 crc kubenswrapper[4760]: I0123 19:22:10.666695 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-55n44_a7f5467d-783f-4be7-a149-ea8b97bcf468/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:10 crc kubenswrapper[4760]: I0123 19:22:10.829507 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e938088-0a8b-43ef-8e83-e752649de48d/nova-metadata-log/0.log" Jan 23 19:22:11 crc kubenswrapper[4760]: I0123 19:22:11.135953 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/mysql-bootstrap/0.log" Jan 23 19:22:11 crc kubenswrapper[4760]: I0123 19:22:11.136642 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ed55273b-51cf-490f-80ec-003cd28fa749/nova-scheduler-scheduler/0.log" Jan 23 19:22:11 crc kubenswrapper[4760]: I0123 19:22:11.414032 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/mysql-bootstrap/0.log" Jan 23 19:22:11 crc kubenswrapper[4760]: I0123 19:22:11.427044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c41fcdb0-57f0-4045-948f-16e9f075ae61/galera/0.log" Jan 23 19:22:11 crc kubenswrapper[4760]: I0123 19:22:11.638092 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/mysql-bootstrap/0.log" Jan 23 19:22:12 crc kubenswrapper[4760]: I0123 19:22:12.318717 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/mysql-bootstrap/0.log" Jan 23 19:22:12 crc kubenswrapper[4760]: I0123 19:22:12.323781 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb7df56d-a417-4f3e-8f53-8ec91ab9d4ad/galera/0.log" Jan 23 19:22:12 crc kubenswrapper[4760]: I0123 19:22:12.744228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b41e1f55-3448-4112-8aca-c5c2d6018310/openstackclient/0.log" Jan 23 19:22:12 crc kubenswrapper[4760]: I0123 19:22:12.771363 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2wpph_ea0533e4-88c1-4a03-93a9-f0680acaafc5/ovn-controller/0.log" Jan 23 19:22:12 crc kubenswrapper[4760]: I0123 19:22:12.778112 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2e938088-0a8b-43ef-8e83-e752649de48d/nova-metadata-metadata/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.066872 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dc25n_b2c3a21d-ca0d-4af3-8e01-1e7e7bc25212/openstack-network-exporter/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.127440 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server-init/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.327234 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server-init/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.385753 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovs-vswitchd/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.389515 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lf5j_eef24537-281d-489c-b15b-5610cfc62b32/ovsdb-server/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.586955 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_519052b1-de37-42f3-8811-9252e225ad9b/openstack-network-exporter/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.650097 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xpd9x_12785b41-cc5b-4404-ac5d-42b24f3046b4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.667296 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_519052b1-de37-42f3-8811-9252e225ad9b/ovn-northd/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.856526 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c82ab7b9-010c-49aa-b6cc-a654dad56b87/openstack-network-exporter/0.log" Jan 23 19:22:13 crc kubenswrapper[4760]: I0123 19:22:13.867616 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c82ab7b9-010c-49aa-b6cc-a654dad56b87/ovsdbserver-nb/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.084544 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7350c936-08ea-4b64-ae16-a0a7c3241c52/openstack-network-exporter/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.104822 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7350c936-08ea-4b64-ae16-a0a7c3241c52/ovsdbserver-sb/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.285875 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66645c546d-bcr2r_f40c8acd-aef7-4575-bf9b-18a4e220b34b/placement-api/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.374858 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66645c546d-bcr2r_f40c8acd-aef7-4575-bf9b-18a4e220b34b/placement-log/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.383153 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/setup-container/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.616337 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/setup-container/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.667820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_108fc09d-d5b2-41bc-b2dd-f2edb1847366/rabbitmq/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.678356 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/setup-container/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.869338 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/setup-container/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.969049 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1c1aa6a7-0392-4091-b65f-69e5e224288c/rabbitmq/0.log" Jan 23 19:22:14 crc kubenswrapper[4760]: I0123 19:22:14.981665 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-9mpmw_85969cec-7a43-4aed-9ec1-522308d222a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.213752 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2mpj6_c6af47ac-285d-4ee9-8ab6-1aa3d98d3927/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.227802 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-wgxkm_b5489ba9-2339-49ff-b4b1-5ac088f89e85/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.489259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pr2nd_05548e64-64a8-42d3-8611-6b10492801d6/ssh-known-hosts-edpm-deployment/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.513299 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6b58d560-8084-471f-a385-c36ce2d28bd8/tempest-tests-tempest-tests-runner/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.777139 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_731cb86c-5b1c-4f47-843a-bd70bc4656d3/test-operator-logs-container/0.log" Jan 23 19:22:15 crc kubenswrapper[4760]: I0123 19:22:15.870151 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5xmz6_4824cd7d-8d66-48ac-bf98-f7f4ee516458/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:22:16 crc kubenswrapper[4760]: I0123 19:22:16.075277 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:22:16 crc kubenswrapper[4760]: I0123 19:22:16.075340 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:22:34 crc kubenswrapper[4760]: I0123 19:22:34.887841 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0db24b5a-b078-42ca-b3ef-4abf3cf33531/memcached/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.076171 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.076652 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.316343 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5g86q_3648750a-24fe-4391-8921-66d791485e98/manager/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.493599 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.645751 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.676013 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.701361 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.839250 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/util/0.log" Jan 23 19:22:46 crc kubenswrapper[4760]: I0123 19:22:46.906489 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/extract/0.log" Jan 23 19:22:46 crc 
kubenswrapper[4760]: I0123 19:22:46.906849 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6d6a49e2804cb4ad09edfba48d176e25d0605882833288047ae8dd9832jqb7_f45d762a-3d92-4bd5-8e93-6a940ef50517/pull/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.095373 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-cql54_fdb3af86-9ecd-45de-8f76-976ff884b581/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.114341 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-j9vqr_d0989ccd-5163-46a0-b578-975ba1c31f03/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.328444 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-ljpl4_0d449643-d693-4591-a0d6-42e8129a3468/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.451183 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-pjz6t_ef671ec0-a50e-4acd-bd63-31aa36cf3033/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.513446 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8w8lt_fcc9617c-e7aa-4707-bcaf-1492e3e0fee6/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.730079 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-vvcd8_58df2b6d-bc85-4266-bc2c-143cd52efc28/manager/0.log" Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.889352 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-cxqww_f56403a2-dc6e-4362-99c2-669531fd3d8d/manager/0.log" 
Jan 23 19:22:47 crc kubenswrapper[4760]: I0123 19:22:47.956726 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-qf78h_4d84645c-b378-4acd-a3e5-638c61a3b709/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.108040 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7758cc4469-bczdt_8a1115aa-5fc1-4dc1-8752-7d15f984837b/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.174554 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-tlmlf_285e41c1-c4f8-4978-9a78-ca8d88b45f29/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.375619 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-jpt62_b96abc36-760b-4dfb-bc01-80872c59c059/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.470131 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-jq9l4_587a0d90-d644-4501-bc83-ef454dc4b3d9/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.584130 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-ngqsw_89d52854-e7b7-4eba-b990-49a971674ab5/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.641227 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854hqwzm_ed6619a3-ea05-44ae-880e-c9ba87fb93f9/manager/0.log" Jan 23 19:22:48 crc kubenswrapper[4760]: I0123 19:22:48.919002 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ff567b4f8-wdx4z_e9a6d033-9989-4ea2-a4c1-734f2baa1828/operator/0.log" Jan 23 19:22:49 crc kubenswrapper[4760]: I0123 19:22:49.045742 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4cxm9_572066a4-717d-4bc0-8ef4-146bd33c3768/registry-server/0.log" Jan 23 19:22:49 crc kubenswrapper[4760]: I0123 19:22:49.271766 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-tbdng_bb6317fe-84f6-4921-9286-6b1aadd6d038/manager/0.log" Jan 23 19:22:49 crc kubenswrapper[4760]: I0123 19:22:49.397912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-lzv66_0dab320d-061f-43f2-9e57-1c94b958522a/manager/0.log" Jan 23 19:22:49 crc kubenswrapper[4760]: I0123 19:22:49.503561 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-j24jt_ec704d93-0ca4-4d63-a123-dbb5a62bffed/operator/0.log" Jan 23 19:22:49 crc kubenswrapper[4760]: I0123 19:22:49.710741 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-kmclp_88c1fb15-33fa-40cf-afa9-068d281bbed5/manager/0.log" Jan 23 19:22:50 crc kubenswrapper[4760]: I0123 19:22:50.006248 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-7bc5b_9d6049ab-b6ff-41e2-8e37-f3c2102d5ab0/manager/0.log" Jan 23 19:22:50 crc kubenswrapper[4760]: I0123 19:22:50.081832 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7555664f8b-7kpfz_f5c3fafa-733d-4719-89f5-afd3c885919e/manager/0.log" Jan 23 19:22:50 crc kubenswrapper[4760]: I0123 19:22:50.092624 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-nxws7_78a244f9-feb4-4df5-b5ec-7bb09185e655/manager/0.log" Jan 23 19:22:50 crc kubenswrapper[4760]: I0123 19:22:50.220946 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-r2pws_807135e7-1ace-4928-be9b-82b8a58464fe/manager/0.log" Jan 23 19:23:10 crc kubenswrapper[4760]: I0123 19:23:10.030859 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ffvsn_82314a42-a08b-4561-b24a-71e715d5d37f/control-plane-machine-set-operator/0.log" Jan 23 19:23:10 crc kubenswrapper[4760]: I0123 19:23:10.274018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qkh24_c08fd2c2-1700-4296-a369-62c3c9928a63/machine-api-operator/0.log" Jan 23 19:23:10 crc kubenswrapper[4760]: I0123 19:23:10.290477 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qkh24_c08fd2c2-1700-4296-a369-62c3c9928a63/kube-rbac-proxy/0.log" Jan 23 19:23:16 crc kubenswrapper[4760]: I0123 19:23:16.075624 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:23:16 crc kubenswrapper[4760]: I0123 19:23:16.076114 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:23:16 crc kubenswrapper[4760]: I0123 19:23:16.076167 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 19:23:16 crc kubenswrapper[4760]: I0123 19:23:16.077445 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:23:16 crc kubenswrapper[4760]: I0123 19:23:16.077530 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f" gracePeriod=600 Jan 23 19:23:17 crc kubenswrapper[4760]: I0123 19:23:17.187068 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f" exitCode=0 Jan 23 19:23:17 crc kubenswrapper[4760]: I0123 19:23:17.187142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f"} Jan 23 19:23:17 crc kubenswrapper[4760]: I0123 19:23:17.187714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerStarted","Data":"e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4"} Jan 23 19:23:17 crc kubenswrapper[4760]: I0123 19:23:17.187745 4760 scope.go:117] "RemoveContainer" 
containerID="b44cccd95c7e671482c2fac320774ee0f71928e61c0478f052de4ffb3d47831d" Jan 23 19:23:23 crc kubenswrapper[4760]: I0123 19:23:23.980770 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pnwww_9ef58368-3cff-49ff-8dfd-17ae3ff9e710/cert-manager-controller/0.log" Jan 23 19:23:24 crc kubenswrapper[4760]: I0123 19:23:24.619092 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-f4cdr_8eed5962-6318-49b8-82a5-7f10b629d81c/cert-manager-webhook/0.log" Jan 23 19:23:24 crc kubenswrapper[4760]: I0123 19:23:24.654526 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-t58c6_0a7dda60-1788-4458-b1c4-fa4ecfd723a2/cert-manager-cainjector/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.081397 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qlk28_45b6025c-fe75-4723-8f72-7ef9d4414827/nmstate-console-plugin/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.256864 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j7k9r_0627471a-680e-425a-a2de-e4e8d1b4e956/nmstate-handler/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.312962 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj872_02fddd4e-8b61-4c07-b08d-f8ab8a2799ba/kube-rbac-proxy/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.345069 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj872_02fddd4e-8b61-4c07-b08d-f8ab8a2799ba/nmstate-metrics/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.486068 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cnl5v_7cffdf90-9546-4761-bb9d-c4c6da9dffa7/nmstate-operator/0.log" Jan 23 19:23:38 crc kubenswrapper[4760]: I0123 19:23:38.593134 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rbsvp_5d6ac3e7-df4e-4ebc-981e-fc737f3a55ea/nmstate-webhook/0.log" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.684147 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:23:45 crc kubenswrapper[4760]: E0123 19:23:45.688766 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2937f8-750b-410e-a197-28c8b02d4dd4" containerName="container-00" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.688813 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2937f8-750b-410e-a197-28c8b02d4dd4" containerName="container-00" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.689139 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2937f8-750b-410e-a197-28c8b02d4dd4" containerName="container-00" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.691479 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.718904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.718965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkswd\" (UniqueName: \"kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.719095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.719875 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.820461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.820877 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.820955 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkswd\" (UniqueName: \"kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.821285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.821636 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:45 crc kubenswrapper[4760]: I0123 19:23:45.852457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkswd\" (UniqueName: \"kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd\") pod \"certified-operators-rtsjn\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:46 crc kubenswrapper[4760]: I0123 19:23:46.019166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:46 crc kubenswrapper[4760]: I0123 19:23:46.631324 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:23:47 crc kubenswrapper[4760]: I0123 19:23:47.439579 4760 generic.go:334] "Generic (PLEG): container finished" podID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerID="f4fe41049c994ded73620d5dd53147630e683a24f4d7c00d965ac395d3a12905" exitCode=0 Jan 23 19:23:47 crc kubenswrapper[4760]: I0123 19:23:47.439639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerDied","Data":"f4fe41049c994ded73620d5dd53147630e683a24f4d7c00d965ac395d3a12905"} Jan 23 19:23:47 crc kubenswrapper[4760]: I0123 19:23:47.439866 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerStarted","Data":"6bf5df093a791472ea561abe2396699a4bb1c07d8debf6a49cf0e2b9e175fbdc"} Jan 23 19:23:48 crc kubenswrapper[4760]: I0123 19:23:48.449349 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerStarted","Data":"317e237c3a285e4512d9b06d9b65b83397010dcbbfa9b8bffd7fb313e199606a"} Jan 23 19:23:49 crc kubenswrapper[4760]: I0123 19:23:49.460343 4760 generic.go:334] "Generic (PLEG): container finished" podID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerID="317e237c3a285e4512d9b06d9b65b83397010dcbbfa9b8bffd7fb313e199606a" exitCode=0 Jan 23 19:23:49 crc kubenswrapper[4760]: I0123 19:23:49.460483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" 
event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerDied","Data":"317e237c3a285e4512d9b06d9b65b83397010dcbbfa9b8bffd7fb313e199606a"} Jan 23 19:23:50 crc kubenswrapper[4760]: I0123 19:23:50.471086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerStarted","Data":"f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d"} Jan 23 19:23:50 crc kubenswrapper[4760]: I0123 19:23:50.488314 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rtsjn" podStartSLOduration=3.004927063 podStartE2EDuration="5.488292921s" podCreationTimestamp="2026-01-23 19:23:45 +0000 UTC" firstStartedPulling="2026-01-23 19:23:47.441290519 +0000 UTC m=+4970.443748452" lastFinishedPulling="2026-01-23 19:23:49.924656377 +0000 UTC m=+4972.927114310" observedRunningTime="2026-01-23 19:23:50.486204207 +0000 UTC m=+4973.488662140" watchObservedRunningTime="2026-01-23 19:23:50.488292921 +0000 UTC m=+4973.490750854" Jan 23 19:23:56 crc kubenswrapper[4760]: I0123 19:23:56.020550 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:56 crc kubenswrapper[4760]: I0123 19:23:56.021127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:56 crc kubenswrapper[4760]: I0123 19:23:56.113954 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:56 crc kubenswrapper[4760]: I0123 19:23:56.565937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:23:56 crc kubenswrapper[4760]: I0123 19:23:56.613976 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:23:58 crc kubenswrapper[4760]: I0123 19:23:58.530028 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rtsjn" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="registry-server" containerID="cri-o://f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d" gracePeriod=2 Jan 23 19:23:59 crc kubenswrapper[4760]: E0123 19:23:59.112056 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa16750_a263_41ac_b6ba_92c602d2f9e7.slice/crio-conmon-f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d.scope\": RecentStats: unable to find data in memory cache]" Jan 23 19:23:59 crc kubenswrapper[4760]: I0123 19:23:59.542184 4760 generic.go:334] "Generic (PLEG): container finished" podID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerID="f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d" exitCode=0 Jan 23 19:23:59 crc kubenswrapper[4760]: I0123 19:23:59.542278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerDied","Data":"f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d"} Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.196604 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.329195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkswd\" (UniqueName: \"kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd\") pod \"afa16750-a263-41ac-b6ba-92c602d2f9e7\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.330089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities\") pod \"afa16750-a263-41ac-b6ba-92c602d2f9e7\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.330235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content\") pod \"afa16750-a263-41ac-b6ba-92c602d2f9e7\" (UID: \"afa16750-a263-41ac-b6ba-92c602d2f9e7\") " Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.330860 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities" (OuterVolumeSpecName: "utilities") pod "afa16750-a263-41ac-b6ba-92c602d2f9e7" (UID: "afa16750-a263-41ac-b6ba-92c602d2f9e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.337431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd" (OuterVolumeSpecName: "kube-api-access-mkswd") pod "afa16750-a263-41ac-b6ba-92c602d2f9e7" (UID: "afa16750-a263-41ac-b6ba-92c602d2f9e7"). InnerVolumeSpecName "kube-api-access-mkswd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.384869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afa16750-a263-41ac-b6ba-92c602d2f9e7" (UID: "afa16750-a263-41ac-b6ba-92c602d2f9e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.432692 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.432745 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkswd\" (UniqueName: \"kubernetes.io/projected/afa16750-a263-41ac-b6ba-92c602d2f9e7-kube-api-access-mkswd\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.432757 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa16750-a263-41ac-b6ba-92c602d2f9e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.553662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtsjn" event={"ID":"afa16750-a263-41ac-b6ba-92c602d2f9e7","Type":"ContainerDied","Data":"6bf5df093a791472ea561abe2396699a4bb1c07d8debf6a49cf0e2b9e175fbdc"} Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.553734 4760 scope.go:117] "RemoveContainer" containerID="f8883d32de03c86c9a797ddd2e66a4f9d2b3721cca238648db121bfb57ea417d" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.553750 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtsjn" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.573282 4760 scope.go:117] "RemoveContainer" containerID="317e237c3a285e4512d9b06d9b65b83397010dcbbfa9b8bffd7fb313e199606a" Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.586203 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.596515 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rtsjn"] Jan 23 19:24:00 crc kubenswrapper[4760]: I0123 19:24:00.625489 4760 scope.go:117] "RemoveContainer" containerID="f4fe41049c994ded73620d5dd53147630e683a24f4d7c00d965ac395d3a12905" Jan 23 19:24:01 crc kubenswrapper[4760]: I0123 19:24:01.604552 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" path="/var/lib/kubelet/pods/afa16750-a263-41ac-b6ba-92c602d2f9e7/volumes" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.466076 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5pm7w_d77acbc5-a14f-4002-ac5d-f6c90f44faf6/kube-rbac-proxy/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.550456 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5pm7w_d77acbc5-a14f-4002-ac5d-f6c90f44faf6/controller/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.705170 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.893097 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.939516 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.942904 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:24:06 crc kubenswrapper[4760]: I0123 19:24:06.954758 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.184912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.193677 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.193712 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.193685 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.400493 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-reloader/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.407882 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-frr-files/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.416471 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/controller/0.log" Jan 23 19:24:07 crc kubenswrapper[4760]: I0123 19:24:07.421713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/cp-metrics/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.063859 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/kube-rbac-proxy/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.108081 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/frr-metrics/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.109111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/kube-rbac-proxy-frr/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.288448 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/reloader/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.326745 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-lzmgt_bdfea4ed-b515-4324-a846-11743d9ae4ab/frr-k8s-webhook-server/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.645664 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cd7b57f9b-wsfcj_21e5a15c-db54-43f3-8dd9-834d4a327edd/manager/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.767928 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b8f8768df-khhb6_dc02afcf-f520-4bdf-a8ae-52c2a9c1857e/webhook-server/0.log" Jan 23 19:24:08 crc kubenswrapper[4760]: I0123 19:24:08.905546 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vcqvw_c57f00f5-beec-44ff-b9cc-83ed33ddc502/kube-rbac-proxy/0.log" Jan 23 19:24:09 crc kubenswrapper[4760]: I0123 19:24:09.470592 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vcqvw_c57f00f5-beec-44ff-b9cc-83ed33ddc502/speaker/0.log" Jan 23 19:24:09 crc kubenswrapper[4760]: I0123 19:24:09.834787 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cxwr6_e951e064-8240-463f-b524-295257f45405/frr/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.278338 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.512099 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.531718 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.584691 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.746643 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/extract/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.748986 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/util/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.754155 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5qkvw_6af4de6d-bd37-47ca-95b2-bf48577ef81c/pull/0.log" Jan 23 19:24:23 crc kubenswrapper[4760]: I0123 19:24:23.942856 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.113652 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.115847 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.156232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.342114 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/extract/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.377027 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/pull/0.log" Jan 23 
19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.420011 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hqmd7_95b32adf-ef73-48db-9961-c29c4a278ff5/util/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.529263 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.729772 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.774311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.777793 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:24:24 crc kubenswrapper[4760]: I0123 19:24:24.922127 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-utilities/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.023311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/extract-content/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.235733 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.478693 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.483703 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.540684 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.608842 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2mmwq_6b1cac32-4085-41bd-83a1-f3488a2ca17f/registry-server/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.857061 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-utilities/0.log" Jan 23 19:24:25 crc kubenswrapper[4760]: I0123 19:24:25.878288 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/extract-content/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.450931 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2llpp_7dc5cdf6-ad52-4c3b-a100-08709b3e06c6/marketplace-operator/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.476226 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn5nq_8e461375-5185-45d5-9abe-89c57c170d0c/registry-server/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.493092 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.659283 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.686022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.701026 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.916307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-utilities/0.log" Jan 23 19:24:26 crc kubenswrapper[4760]: I0123 19:24:26.941197 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/extract-content/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.143292 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q5h26_139f0f56-e7d6-4950-8313-ba6047ab4955/registry-server/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.143911 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.344198 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.405516 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.422999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.546265 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-utilities/0.log" Jan 23 19:24:27 crc kubenswrapper[4760]: I0123 19:24:27.586378 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/extract-content/0.log" Jan 23 19:24:28 crc kubenswrapper[4760]: I0123 19:24:28.296307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94f29_d8c774d8-e0cd-4c3d-949d-dfdec7b1d31c/registry-server/0.log" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.645834 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:33 crc kubenswrapper[4760]: E0123 19:24:33.646968 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="registry-server" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.646987 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="registry-server" Jan 23 19:24:33 crc kubenswrapper[4760]: E0123 19:24:33.647023 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="extract-content" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.647033 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="extract-content" Jan 23 19:24:33 crc kubenswrapper[4760]: E0123 19:24:33.647054 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="extract-utilities" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.647062 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="extract-utilities" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.647264 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa16750-a263-41ac-b6ba-92c602d2f9e7" containerName="registry-server" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.649009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.663382 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.798539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.799348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkt6\" (UniqueName: \"kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6\") pod \"community-operators-rxmxd\" (UID: 
\"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.799542 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.901566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdkt6\" (UniqueName: \"kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.901684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.901762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.902172 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities\") pod \"community-operators-rxmxd\" (UID: 
\"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.902209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.920459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdkt6\" (UniqueName: \"kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6\") pod \"community-operators-rxmxd\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:33 crc kubenswrapper[4760]: I0123 19:24:33.968015 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:34 crc kubenswrapper[4760]: I0123 19:24:34.540506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:34 crc kubenswrapper[4760]: I0123 19:24:34.827191 4760 generic.go:334] "Generic (PLEG): container finished" podID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerID="65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9" exitCode=0 Jan 23 19:24:34 crc kubenswrapper[4760]: I0123 19:24:34.827243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerDied","Data":"65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9"} Jan 23 19:24:34 crc kubenswrapper[4760]: I0123 19:24:34.827279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerStarted","Data":"c0b1504e3899420d0b3307b247678f6d761c4a45263f754a628a734c77a5b0d8"} Jan 23 19:24:36 crc kubenswrapper[4760]: I0123 19:24:36.845008 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerStarted","Data":"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080"} Jan 23 19:24:37 crc kubenswrapper[4760]: I0123 19:24:37.855998 4760 generic.go:334] "Generic (PLEG): container finished" podID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerID="ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080" exitCode=0 Jan 23 19:24:37 crc kubenswrapper[4760]: I0123 19:24:37.856050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" 
event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerDied","Data":"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080"} Jan 23 19:24:39 crc kubenswrapper[4760]: I0123 19:24:39.874603 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerStarted","Data":"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0"} Jan 23 19:24:43 crc kubenswrapper[4760]: I0123 19:24:43.969609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:43 crc kubenswrapper[4760]: I0123 19:24:43.970497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:44 crc kubenswrapper[4760]: I0123 19:24:44.019980 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:44 crc kubenswrapper[4760]: I0123 19:24:44.037544 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxmxd" podStartSLOduration=7.598443825 podStartE2EDuration="11.037526611s" podCreationTimestamp="2026-01-23 19:24:33 +0000 UTC" firstStartedPulling="2026-01-23 19:24:34.828889728 +0000 UTC m=+5017.831347651" lastFinishedPulling="2026-01-23 19:24:38.267972504 +0000 UTC m=+5021.270430437" observedRunningTime="2026-01-23 19:24:39.896859288 +0000 UTC m=+5022.899317231" watchObservedRunningTime="2026-01-23 19:24:44.037526611 +0000 UTC m=+5027.039984564" Jan 23 19:24:44 crc kubenswrapper[4760]: I0123 19:24:44.978450 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:45 crc kubenswrapper[4760]: I0123 19:24:45.051305 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:45 crc kubenswrapper[4760]: E0123 19:24:45.328209 4760 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.90:46736->38.129.56.90:34677: read tcp 38.129.56.90:46736->38.129.56.90:34677: read: connection reset by peer Jan 23 19:24:45 crc kubenswrapper[4760]: E0123 19:24:45.806620 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.90:46748->38.129.56.90:34677: write tcp 38.129.56.90:46748->38.129.56.90:34677: write: broken pipe Jan 23 19:24:46 crc kubenswrapper[4760]: I0123 19:24:46.934059 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxmxd" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="registry-server" containerID="cri-o://b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0" gracePeriod=2 Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.513911 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.659899 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdkt6\" (UniqueName: \"kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6\") pod \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.659991 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities\") pod \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.660046 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content\") pod \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\" (UID: \"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7\") " Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.660687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities" (OuterVolumeSpecName: "utilities") pod "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" (UID: "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.662427 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.667699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6" (OuterVolumeSpecName: "kube-api-access-mdkt6") pod "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" (UID: "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7"). InnerVolumeSpecName "kube-api-access-mdkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.706642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" (UID: "af8922f8-45c7-452e-a7fb-ac71c2e0d8b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.764746 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdkt6\" (UniqueName: \"kubernetes.io/projected/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-kube-api-access-mdkt6\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.764978 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.944338 4760 generic.go:334] "Generic (PLEG): container finished" podID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerID="b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0" exitCode=0 Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.944389 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerDied","Data":"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0"} Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.944418 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmxd" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.944441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmxd" event={"ID":"af8922f8-45c7-452e-a7fb-ac71c2e0d8b7","Type":"ContainerDied","Data":"c0b1504e3899420d0b3307b247678f6d761c4a45263f754a628a734c77a5b0d8"} Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.944460 4760 scope.go:117] "RemoveContainer" containerID="b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.963706 4760 scope.go:117] "RemoveContainer" containerID="ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080" Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.994564 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:47 crc kubenswrapper[4760]: I0123 19:24:47.999776 4760 scope.go:117] "RemoveContainer" containerID="65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.005015 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxmxd"] Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.040073 4760 scope.go:117] "RemoveContainer" containerID="b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0" Jan 23 19:24:48 crc kubenswrapper[4760]: E0123 19:24:48.040625 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0\": container with ID starting with b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0 not found: ID does not exist" containerID="b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.040668 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0"} err="failed to get container status \"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0\": rpc error: code = NotFound desc = could not find container \"b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0\": container with ID starting with b8b7aff5b7b7e3560da7021a0a0fd553ff33bad94a6d37870a6ffe6be57e17e0 not found: ID does not exist" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.040699 4760 scope.go:117] "RemoveContainer" containerID="ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080" Jan 23 19:24:48 crc kubenswrapper[4760]: E0123 19:24:48.041137 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080\": container with ID starting with ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080 not found: ID does not exist" containerID="ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.041172 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080"} err="failed to get container status \"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080\": rpc error: code = NotFound desc = could not find container \"ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080\": container with ID starting with ad3a255348742a32c1d0dacc62439e1a3dca0e51b5b15c88ff8e4333ae39a080 not found: ID does not exist" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.041207 4760 scope.go:117] "RemoveContainer" containerID="65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9" Jan 23 19:24:48 crc kubenswrapper[4760]: E0123 
19:24:48.041534 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9\": container with ID starting with 65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9 not found: ID does not exist" containerID="65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9" Jan 23 19:24:48 crc kubenswrapper[4760]: I0123 19:24:48.041597 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9"} err="failed to get container status \"65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9\": rpc error: code = NotFound desc = could not find container \"65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9\": container with ID starting with 65a9a8023dc6433fea80b129ab217916bf2ca531ea52984a01306d613eefefb9 not found: ID does not exist" Jan 23 19:24:49 crc kubenswrapper[4760]: I0123 19:24:49.606771 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" path="/var/lib/kubelet/pods/af8922f8-45c7-452e-a7fb-ac71c2e0d8b7/volumes" Jan 23 19:25:46 crc kubenswrapper[4760]: I0123 19:25:46.075798 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:25:46 crc kubenswrapper[4760]: I0123 19:25:46.076329 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 23 19:26:16 crc kubenswrapper[4760]: I0123 19:26:16.076344 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:26:16 crc kubenswrapper[4760]: I0123 19:26:16.076949 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:26:30 crc kubenswrapper[4760]: I0123 19:26:30.941268 4760 generic.go:334] "Generic (PLEG): container finished" podID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerID="a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190" exitCode=0 Jan 23 19:26:30 crc kubenswrapper[4760]: I0123 19:26:30.941329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" event={"ID":"891c50bf-efcf-4171-ac9d-15ed375f2f14","Type":"ContainerDied","Data":"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190"} Jan 23 19:26:30 crc kubenswrapper[4760]: I0123 19:26:30.942437 4760 scope.go:117] "RemoveContainer" containerID="a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190" Jan 23 19:26:31 crc kubenswrapper[4760]: I0123 19:26:31.079575 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rgpk7_must-gather-kd2zc_891c50bf-efcf-4171-ac9d-15ed375f2f14/gather/0.log" Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.339889 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rgpk7/must-gather-kd2zc"] Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.340919 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="copy" containerID="cri-o://4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa" gracePeriod=2 Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.351547 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rgpk7/must-gather-kd2zc"] Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.880964 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rgpk7_must-gather-kd2zc_891c50bf-efcf-4171-ac9d-15ed375f2f14/copy/0.log" Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.881694 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.991090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n8kc\" (UniqueName: \"kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc\") pod \"891c50bf-efcf-4171-ac9d-15ed375f2f14\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " Jan 23 19:26:42 crc kubenswrapper[4760]: I0123 19:26:42.991285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output\") pod \"891c50bf-efcf-4171-ac9d-15ed375f2f14\" (UID: \"891c50bf-efcf-4171-ac9d-15ed375f2f14\") " Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.055766 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rgpk7_must-gather-kd2zc_891c50bf-efcf-4171-ac9d-15ed375f2f14/copy/0.log" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.056925 4760 generic.go:334] "Generic (PLEG): container finished" podID="891c50bf-efcf-4171-ac9d-15ed375f2f14" 
containerID="4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa" exitCode=143 Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.056983 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rgpk7/must-gather-kd2zc" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.056989 4760 scope.go:117] "RemoveContainer" containerID="4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.078534 4760 scope.go:117] "RemoveContainer" containerID="a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.181973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "891c50bf-efcf-4171-ac9d-15ed375f2f14" (UID: "891c50bf-efcf-4171-ac9d-15ed375f2f14"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.194618 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/891c50bf-efcf-4171-ac9d-15ed375f2f14-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.585681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc" (OuterVolumeSpecName: "kube-api-access-2n8kc") pod "891c50bf-efcf-4171-ac9d-15ed375f2f14" (UID: "891c50bf-efcf-4171-ac9d-15ed375f2f14"). InnerVolumeSpecName "kube-api-access-2n8kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.601999 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n8kc\" (UniqueName: \"kubernetes.io/projected/891c50bf-efcf-4171-ac9d-15ed375f2f14-kube-api-access-2n8kc\") on node \"crc\" DevicePath \"\"" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.606818 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" path="/var/lib/kubelet/pods/891c50bf-efcf-4171-ac9d-15ed375f2f14/volumes" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.799650 4760 scope.go:117] "RemoveContainer" containerID="4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa" Jan 23 19:26:43 crc kubenswrapper[4760]: E0123 19:26:43.800826 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa\": container with ID starting with 4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa not found: ID does not exist" containerID="4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.800888 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa"} err="failed to get container status \"4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa\": rpc error: code = NotFound desc = could not find container \"4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa\": container with ID starting with 4ccc24df554e86d0ae339e51c76d8d0784c3f0c157db463c908e61fbe409ccaa not found: ID does not exist" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.800927 4760 scope.go:117] "RemoveContainer" 
containerID="a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190" Jan 23 19:26:43 crc kubenswrapper[4760]: E0123 19:26:43.801494 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190\": container with ID starting with a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190 not found: ID does not exist" containerID="a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190" Jan 23 19:26:43 crc kubenswrapper[4760]: I0123 19:26:43.801524 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190"} err="failed to get container status \"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190\": rpc error: code = NotFound desc = could not find container \"a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190\": container with ID starting with a405bef95d876aa95c3672c386d1b48f22d8eb94281913a1c55148617d6cb190 not found: ID does not exist" Jan 23 19:26:46 crc kubenswrapper[4760]: I0123 19:26:46.077678 4760 patch_prober.go:28] interesting pod/machine-config-daemon-6xsk7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:26:46 crc kubenswrapper[4760]: I0123 19:26:46.078160 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:26:46 crc kubenswrapper[4760]: I0123 19:26:46.078199 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" Jan 23 19:26:46 crc kubenswrapper[4760]: I0123 19:26:46.078879 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4"} pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:26:46 crc kubenswrapper[4760]: I0123 19:26:46.078933 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" containerName="machine-config-daemon" containerID="cri-o://e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" gracePeriod=600 Jan 23 19:26:46 crc kubenswrapper[4760]: E0123 19:26:46.201881 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:26:47 crc kubenswrapper[4760]: I0123 19:26:47.090594 4760 generic.go:334] "Generic (PLEG): container finished" podID="20652c61-310f-464d-ae66-dfc025a16b8d" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" exitCode=0 Jan 23 19:26:47 crc kubenswrapper[4760]: I0123 19:26:47.090690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
event={"ID":"20652c61-310f-464d-ae66-dfc025a16b8d","Type":"ContainerDied","Data":"e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4"} Jan 23 19:26:47 crc kubenswrapper[4760]: I0123 19:26:47.090879 4760 scope.go:117] "RemoveContainer" containerID="b05418f9558272a495fa54d74832f4d63d76f79b825371eb93640844261e899f" Jan 23 19:26:47 crc kubenswrapper[4760]: I0123 19:26:47.091768 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:26:47 crc kubenswrapper[4760]: E0123 19:26:47.092165 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:27:00 crc kubenswrapper[4760]: I0123 19:27:00.595795 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:27:00 crc kubenswrapper[4760]: E0123 19:27:00.596881 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:27:09 crc kubenswrapper[4760]: I0123 19:27:09.589877 4760 scope.go:117] "RemoveContainer" containerID="a5bc20a102a32478252bd7a3073372f998b4a2f9b074e07e15b554e88b73ffd9" Jan 23 19:27:09 crc kubenswrapper[4760]: I0123 19:27:09.611813 4760 scope.go:117] "RemoveContainer" 
containerID="91af40cc6f347e7a30710f324ffb0d95ef7022bad96d1fcc3ece4846dcb755c2" Jan 23 19:27:11 crc kubenswrapper[4760]: I0123 19:27:11.596778 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:27:11 crc kubenswrapper[4760]: E0123 19:27:11.597337 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.042157 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:13 crc kubenswrapper[4760]: E0123 19:27:13.042963 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="registry-server" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.042981 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="registry-server" Jan 23 19:27:13 crc kubenswrapper[4760]: E0123 19:27:13.042999 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="extract-content" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043007 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="extract-content" Jan 23 19:27:13 crc kubenswrapper[4760]: E0123 19:27:13.043017 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="gather" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043024 4760 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="gather" Jan 23 19:27:13 crc kubenswrapper[4760]: E0123 19:27:13.043034 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="copy" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043040 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="copy" Jan 23 19:27:13 crc kubenswrapper[4760]: E0123 19:27:13.043057 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="extract-utilities" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043063 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="extract-utilities" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043221 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="gather" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043236 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8922f8-45c7-452e-a7fb-ac71c2e0d8b7" containerName="registry-server" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.043246 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="891c50bf-efcf-4171-ac9d-15ed375f2f14" containerName="copy" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.044600 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.051695 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.192485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.192632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd48\" (UniqueName: \"kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.192665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.294674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.294815 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hhd48\" (UniqueName: \"kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.294843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.295202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.295372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.476401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhd48\" (UniqueName: \"kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48\") pod \"redhat-operators-9whz4\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:13 crc kubenswrapper[4760]: I0123 19:27:13.668904 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:14 crc kubenswrapper[4760]: I0123 19:27:14.156347 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:14 crc kubenswrapper[4760]: W0123 19:27:14.160490 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d28431_4275_4177_bff4_8d41365d4b25.slice/crio-50083d07cbdd09e4e870845b53cc4af196b4f93f8681d0789fcb3c1fa3112c21 WatchSource:0}: Error finding container 50083d07cbdd09e4e870845b53cc4af196b4f93f8681d0789fcb3c1fa3112c21: Status 404 returned error can't find the container with id 50083d07cbdd09e4e870845b53cc4af196b4f93f8681d0789fcb3c1fa3112c21 Jan 23 19:27:14 crc kubenswrapper[4760]: I0123 19:27:14.326690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerStarted","Data":"50083d07cbdd09e4e870845b53cc4af196b4f93f8681d0789fcb3c1fa3112c21"} Jan 23 19:27:15 crc kubenswrapper[4760]: I0123 19:27:15.335999 4760 generic.go:334] "Generic (PLEG): container finished" podID="54d28431-4275-4177-bff4-8d41365d4b25" containerID="60f94cd2931ca0ebab8acacd28888eae66f6d53532d9e5623735a4dfb73f61ce" exitCode=0 Jan 23 19:27:15 crc kubenswrapper[4760]: I0123 19:27:15.336064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerDied","Data":"60f94cd2931ca0ebab8acacd28888eae66f6d53532d9e5623735a4dfb73f61ce"} Jan 23 19:27:15 crc kubenswrapper[4760]: I0123 19:27:15.341631 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:27:16 crc kubenswrapper[4760]: I0123 19:27:16.345310 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerStarted","Data":"2c747a3d15d97a35417865f1df888562a371fda342c5f23b2126da0138bdecbe"} Jan 23 19:27:17 crc kubenswrapper[4760]: I0123 19:27:17.356755 4760 generic.go:334] "Generic (PLEG): container finished" podID="54d28431-4275-4177-bff4-8d41365d4b25" containerID="2c747a3d15d97a35417865f1df888562a371fda342c5f23b2126da0138bdecbe" exitCode=0 Jan 23 19:27:17 crc kubenswrapper[4760]: I0123 19:27:17.356830 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerDied","Data":"2c747a3d15d97a35417865f1df888562a371fda342c5f23b2126da0138bdecbe"} Jan 23 19:27:18 crc kubenswrapper[4760]: I0123 19:27:18.367440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerStarted","Data":"d9677d103d5a7e94d8d38d504db962f76476e48e2e349b894e272a8dd5826099"} Jan 23 19:27:18 crc kubenswrapper[4760]: I0123 19:27:18.388267 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9whz4" podStartSLOduration=2.8806863419999997 podStartE2EDuration="5.388246309s" podCreationTimestamp="2026-01-23 19:27:13 +0000 UTC" firstStartedPulling="2026-01-23 19:27:15.341294008 +0000 UTC m=+5178.343751941" lastFinishedPulling="2026-01-23 19:27:17.848853935 +0000 UTC m=+5180.851311908" observedRunningTime="2026-01-23 19:27:18.387731476 +0000 UTC m=+5181.390189399" watchObservedRunningTime="2026-01-23 19:27:18.388246309 +0000 UTC m=+5181.390704242" Jan 23 19:27:23 crc kubenswrapper[4760]: I0123 19:27:23.669175 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:23 crc kubenswrapper[4760]: I0123 19:27:23.669621 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:23 crc kubenswrapper[4760]: I0123 19:27:23.713069 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:24 crc kubenswrapper[4760]: I0123 19:27:24.486973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:24 crc kubenswrapper[4760]: I0123 19:27:24.537062 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:24 crc kubenswrapper[4760]: I0123 19:27:24.596267 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:27:24 crc kubenswrapper[4760]: E0123 19:27:24.596532 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:27:26 crc kubenswrapper[4760]: I0123 19:27:26.454548 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9whz4" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="registry-server" containerID="cri-o://d9677d103d5a7e94d8d38d504db962f76476e48e2e349b894e272a8dd5826099" gracePeriod=2 Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.480526 4760 generic.go:334] "Generic (PLEG): container finished" podID="54d28431-4275-4177-bff4-8d41365d4b25" containerID="d9677d103d5a7e94d8d38d504db962f76476e48e2e349b894e272a8dd5826099" exitCode=0 Jan 23 19:27:29 crc 
kubenswrapper[4760]: I0123 19:27:29.480611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerDied","Data":"d9677d103d5a7e94d8d38d504db962f76476e48e2e349b894e272a8dd5826099"} Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.625849 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.748335 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content\") pod \"54d28431-4275-4177-bff4-8d41365d4b25\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.748611 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhd48\" (UniqueName: \"kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48\") pod \"54d28431-4275-4177-bff4-8d41365d4b25\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.748644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities\") pod \"54d28431-4275-4177-bff4-8d41365d4b25\" (UID: \"54d28431-4275-4177-bff4-8d41365d4b25\") " Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.750568 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities" (OuterVolumeSpecName: "utilities") pod "54d28431-4275-4177-bff4-8d41365d4b25" (UID: "54d28431-4275-4177-bff4-8d41365d4b25"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.755276 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48" (OuterVolumeSpecName: "kube-api-access-hhd48") pod "54d28431-4275-4177-bff4-8d41365d4b25" (UID: "54d28431-4275-4177-bff4-8d41365d4b25"). InnerVolumeSpecName "kube-api-access-hhd48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.851294 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhd48\" (UniqueName: \"kubernetes.io/projected/54d28431-4275-4177-bff4-8d41365d4b25-kube-api-access-hhd48\") on node \"crc\" DevicePath \"\"" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.851324 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.863986 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54d28431-4275-4177-bff4-8d41365d4b25" (UID: "54d28431-4275-4177-bff4-8d41365d4b25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:27:29 crc kubenswrapper[4760]: I0123 19:27:29.953331 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d28431-4275-4177-bff4-8d41365d4b25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.490861 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9whz4" event={"ID":"54d28431-4275-4177-bff4-8d41365d4b25","Type":"ContainerDied","Data":"50083d07cbdd09e4e870845b53cc4af196b4f93f8681d0789fcb3c1fa3112c21"} Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.491239 4760 scope.go:117] "RemoveContainer" containerID="d9677d103d5a7e94d8d38d504db962f76476e48e2e349b894e272a8dd5826099" Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.490960 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9whz4" Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.510188 4760 scope.go:117] "RemoveContainer" containerID="2c747a3d15d97a35417865f1df888562a371fda342c5f23b2126da0138bdecbe" Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.530674 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.538259 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9whz4"] Jan 23 19:27:30 crc kubenswrapper[4760]: I0123 19:27:30.546352 4760 scope.go:117] "RemoveContainer" containerID="60f94cd2931ca0ebab8acacd28888eae66f6d53532d9e5623735a4dfb73f61ce" Jan 23 19:27:31 crc kubenswrapper[4760]: I0123 19:27:31.606039 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d28431-4275-4177-bff4-8d41365d4b25" path="/var/lib/kubelet/pods/54d28431-4275-4177-bff4-8d41365d4b25/volumes" Jan 23 19:27:36 crc 
kubenswrapper[4760]: I0123 19:27:36.595031 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:27:36 crc kubenswrapper[4760]: E0123 19:27:36.595819 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:27:50 crc kubenswrapper[4760]: I0123 19:27:50.594962 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:27:50 crc kubenswrapper[4760]: E0123 19:27:50.595666 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:28:01 crc kubenswrapper[4760]: I0123 19:28:01.595662 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:28:01 crc kubenswrapper[4760]: E0123 19:28:01.596378 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 
23 19:28:15 crc kubenswrapper[4760]: I0123 19:28:15.598252 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:28:15 crc kubenswrapper[4760]: E0123 19:28:15.599090 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:28:30 crc kubenswrapper[4760]: I0123 19:28:30.595599 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:28:30 crc kubenswrapper[4760]: E0123 19:28:30.596400 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:28:44 crc kubenswrapper[4760]: I0123 19:28:44.595436 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:28:44 crc kubenswrapper[4760]: E0123 19:28:44.596106 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" 
podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:28:57 crc kubenswrapper[4760]: I0123 19:28:57.602787 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:28:57 crc kubenswrapper[4760]: E0123 19:28:57.605077 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:08 crc kubenswrapper[4760]: I0123 19:29:08.595861 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:29:08 crc kubenswrapper[4760]: E0123 19:29:08.597908 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:19 crc kubenswrapper[4760]: I0123 19:29:19.595748 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:29:19 crc kubenswrapper[4760]: E0123 19:29:19.596458 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:31 crc kubenswrapper[4760]: I0123 19:29:31.595202 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:29:31 crc kubenswrapper[4760]: E0123 19:29:31.595917 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:45 crc kubenswrapper[4760]: I0123 19:29:45.599519 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:29:45 crc kubenswrapper[4760]: E0123 19:29:45.602903 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.323589 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:29:54 crc kubenswrapper[4760]: E0123 19:29:54.324815 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="registry-server" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.324837 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d28431-4275-4177-bff4-8d41365d4b25" 
containerName="registry-server" Jan 23 19:29:54 crc kubenswrapper[4760]: E0123 19:29:54.324873 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="extract-content" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.324885 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="extract-content" Jan 23 19:29:54 crc kubenswrapper[4760]: E0123 19:29:54.324903 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="extract-utilities" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.324913 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="extract-utilities" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.325256 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d28431-4275-4177-bff4-8d41365d4b25" containerName="registry-server" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.327497 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.335529 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.452234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.452290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.452323 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghbw\" (UniqueName: \"kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.555163 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.555228 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.555262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ghbw\" (UniqueName: \"kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.555793 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.555833 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.575897 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ghbw\" (UniqueName: \"kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw\") pod \"redhat-marketplace-4bc2c\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:54 crc kubenswrapper[4760]: I0123 19:29:54.655874 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:29:55 crc kubenswrapper[4760]: I0123 19:29:55.199195 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:29:55 crc kubenswrapper[4760]: I0123 19:29:55.788887 4760 generic.go:334] "Generic (PLEG): container finished" podID="917c46f6-bba2-43b4-a8d4-59b9ac3bd623" containerID="ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5" exitCode=0 Jan 23 19:29:55 crc kubenswrapper[4760]: I0123 19:29:55.789038 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerDied","Data":"ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5"} Jan 23 19:29:55 crc kubenswrapper[4760]: I0123 19:29:55.790247 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerStarted","Data":"1a0ee8836995dae610b7b43cb4a4b07b18a74e5e16e0292ce3a226c5c595f596"} Jan 23 19:29:57 crc kubenswrapper[4760]: I0123 19:29:57.609047 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:29:57 crc kubenswrapper[4760]: E0123 19:29:57.610884 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:29:57 crc kubenswrapper[4760]: I0123 19:29:57.811595 4760 generic.go:334] "Generic (PLEG): container finished" podID="917c46f6-bba2-43b4-a8d4-59b9ac3bd623" 
containerID="865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a" exitCode=0 Jan 23 19:29:57 crc kubenswrapper[4760]: I0123 19:29:57.811648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerDied","Data":"865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a"} Jan 23 19:29:58 crc kubenswrapper[4760]: I0123 19:29:58.825013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerStarted","Data":"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5"} Jan 23 19:29:58 crc kubenswrapper[4760]: I0123 19:29:58.848156 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4bc2c" podStartSLOduration=2.3467951830000002 podStartE2EDuration="4.848132826s" podCreationTimestamp="2026-01-23 19:29:54 +0000 UTC" firstStartedPulling="2026-01-23 19:29:55.791102718 +0000 UTC m=+5338.793560651" lastFinishedPulling="2026-01-23 19:29:58.292440361 +0000 UTC m=+5341.294898294" observedRunningTime="2026-01-23 19:29:58.839666352 +0000 UTC m=+5341.842124285" watchObservedRunningTime="2026-01-23 19:29:58.848132826 +0000 UTC m=+5341.850590759" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.154552 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk"] Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.156826 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.161352 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.161689 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.173667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk"] Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.310800 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.310853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mnp\" (UniqueName: \"kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.310942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.412253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.412298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mnp\" (UniqueName: \"kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.412364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.413387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.417649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.429620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mnp\" (UniqueName: \"kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp\") pod \"collect-profiles-29486610-927nk\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.484105 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:00 crc kubenswrapper[4760]: W0123 19:30:00.925585 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4053e9d2_dd8c_4a6f_94ec_3658edf315b2.slice/crio-f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f WatchSource:0}: Error finding container f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f: Status 404 returned error can't find the container with id f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f Jan 23 19:30:00 crc kubenswrapper[4760]: I0123 19:30:00.926161 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk"] Jan 23 19:30:01 crc kubenswrapper[4760]: I0123 19:30:01.850912 4760 generic.go:334] "Generic (PLEG): container finished" podID="4053e9d2-dd8c-4a6f-94ec-3658edf315b2" containerID="37b3d0dacba67dcf4c83960407a7634462ce836f7e660fcb71e6c3cdae12a36b" exitCode=0 Jan 23 19:30:01 crc kubenswrapper[4760]: I0123 19:30:01.851199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" event={"ID":"4053e9d2-dd8c-4a6f-94ec-3658edf315b2","Type":"ContainerDied","Data":"37b3d0dacba67dcf4c83960407a7634462ce836f7e660fcb71e6c3cdae12a36b"} Jan 23 19:30:01 crc kubenswrapper[4760]: I0123 19:30:01.851228 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" event={"ID":"4053e9d2-dd8c-4a6f-94ec-3658edf315b2","Type":"ContainerStarted","Data":"f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f"} Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.175898 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.274310 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume\") pod \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.274818 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47mnp\" (UniqueName: \"kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp\") pod \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.275001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume\") pod \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\" (UID: \"4053e9d2-dd8c-4a6f-94ec-3658edf315b2\") " Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.275669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "4053e9d2-dd8c-4a6f-94ec-3658edf315b2" (UID: "4053e9d2-dd8c-4a6f-94ec-3658edf315b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.281554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp" (OuterVolumeSpecName: "kube-api-access-47mnp") pod "4053e9d2-dd8c-4a6f-94ec-3658edf315b2" (UID: "4053e9d2-dd8c-4a6f-94ec-3658edf315b2"). InnerVolumeSpecName "kube-api-access-47mnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.281659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4053e9d2-dd8c-4a6f-94ec-3658edf315b2" (UID: "4053e9d2-dd8c-4a6f-94ec-3658edf315b2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.377798 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47mnp\" (UniqueName: \"kubernetes.io/projected/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-kube-api-access-47mnp\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.377833 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.377842 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4053e9d2-dd8c-4a6f-94ec-3658edf315b2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.870722 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" event={"ID":"4053e9d2-dd8c-4a6f-94ec-3658edf315b2","Type":"ContainerDied","Data":"f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f"} Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.870771 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ff89ced53c3d04a784b8180c1d738bb863b38a2a08f5da58dcb4606f95da4f" Jan 23 19:30:03 crc kubenswrapper[4760]: I0123 19:30:03.870817 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-927nk" Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.250958 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8"] Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.259224 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-bzbp8"] Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.656533 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.656838 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.701452 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.922355 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:04 crc kubenswrapper[4760]: I0123 19:30:04.975995 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:30:05 crc kubenswrapper[4760]: I0123 19:30:05.607977 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6388456d-1bcc-4b94-844e-1a3f97272d66" path="/var/lib/kubelet/pods/6388456d-1bcc-4b94-844e-1a3f97272d66/volumes" Jan 23 19:30:06 crc kubenswrapper[4760]: I0123 19:30:06.896023 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4bc2c" podUID="917c46f6-bba2-43b4-a8d4-59b9ac3bd623" containerName="registry-server" 
containerID="cri-o://fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5" gracePeriod=2 Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.368917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.459718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content\") pod \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.459783 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities\") pod \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.459939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ghbw\" (UniqueName: \"kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw\") pod \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\" (UID: \"917c46f6-bba2-43b4-a8d4-59b9ac3bd623\") " Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.460987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities" (OuterVolumeSpecName: "utilities") pod "917c46f6-bba2-43b4-a8d4-59b9ac3bd623" (UID: "917c46f6-bba2-43b4-a8d4-59b9ac3bd623"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.467848 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw" (OuterVolumeSpecName: "kube-api-access-4ghbw") pod "917c46f6-bba2-43b4-a8d4-59b9ac3bd623" (UID: "917c46f6-bba2-43b4-a8d4-59b9ac3bd623"). InnerVolumeSpecName "kube-api-access-4ghbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.488730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "917c46f6-bba2-43b4-a8d4-59b9ac3bd623" (UID: "917c46f6-bba2-43b4-a8d4-59b9ac3bd623"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.561993 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ghbw\" (UniqueName: \"kubernetes.io/projected/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-kube-api-access-4ghbw\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.562032 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.562042 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917c46f6-bba2-43b4-a8d4-59b9ac3bd623-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.908883 4760 generic.go:334] "Generic (PLEG): container finished" podID="917c46f6-bba2-43b4-a8d4-59b9ac3bd623" 
containerID="fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5" exitCode=0 Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.908961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerDied","Data":"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5"} Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.909266 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bc2c" event={"ID":"917c46f6-bba2-43b4-a8d4-59b9ac3bd623","Type":"ContainerDied","Data":"1a0ee8836995dae610b7b43cb4a4b07b18a74e5e16e0292ce3a226c5c595f596"} Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.909292 4760 scope.go:117] "RemoveContainer" containerID="fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.909007 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bc2c" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.941888 4760 scope.go:117] "RemoveContainer" containerID="865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a" Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.942006 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.952737 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bc2c"] Jan 23 19:30:07 crc kubenswrapper[4760]: I0123 19:30:07.986057 4760 scope.go:117] "RemoveContainer" containerID="ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.017790 4760 scope.go:117] "RemoveContainer" containerID="fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5" Jan 23 19:30:08 crc kubenswrapper[4760]: E0123 19:30:08.018200 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5\": container with ID starting with fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5 not found: ID does not exist" containerID="fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.018244 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5"} err="failed to get container status \"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5\": rpc error: code = NotFound desc = could not find container \"fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5\": container with ID starting with fab854a52b383b7e78cc321936607ba5352c19cfc6bd34c6dc6b515ffd5471a5 not found: 
ID does not exist" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.018271 4760 scope.go:117] "RemoveContainer" containerID="865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a" Jan 23 19:30:08 crc kubenswrapper[4760]: E0123 19:30:08.018753 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a\": container with ID starting with 865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a not found: ID does not exist" containerID="865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.018840 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a"} err="failed to get container status \"865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a\": rpc error: code = NotFound desc = could not find container \"865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a\": container with ID starting with 865fb332b1684d9908080245191e4c6f3d3a6a5bc7eb5b0ed94376c25807e01a not found: ID does not exist" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.018923 4760 scope.go:117] "RemoveContainer" containerID="ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5" Jan 23 19:30:08 crc kubenswrapper[4760]: E0123 19:30:08.019337 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5\": container with ID starting with ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5 not found: ID does not exist" containerID="ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5" Jan 23 19:30:08 crc kubenswrapper[4760]: I0123 19:30:08.019400 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5"} err="failed to get container status \"ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5\": rpc error: code = NotFound desc = could not find container \"ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5\": container with ID starting with ec6360bd190b3e29ed27566b10a7fa4cf9be5e83e0f503c08f89012a10e3b5a5 not found: ID does not exist" Jan 23 19:30:09 crc kubenswrapper[4760]: I0123 19:30:09.606616 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917c46f6-bba2-43b4-a8d4-59b9ac3bd623" path="/var/lib/kubelet/pods/917c46f6-bba2-43b4-a8d4-59b9ac3bd623/volumes" Jan 23 19:30:09 crc kubenswrapper[4760]: I0123 19:30:09.760727 4760 scope.go:117] "RemoveContainer" containerID="9926087643e73ef9af9cc2d11012022186a3f100f28a730aaef3a7f67a8f259b" Jan 23 19:30:11 crc kubenswrapper[4760]: I0123 19:30:11.597485 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:30:11 crc kubenswrapper[4760]: E0123 19:30:11.598044 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:30:25 crc kubenswrapper[4760]: I0123 19:30:25.600333 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:30:25 crc kubenswrapper[4760]: E0123 19:30:25.601035 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:30:36 crc kubenswrapper[4760]: I0123 19:30:36.595958 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:30:36 crc kubenswrapper[4760]: E0123 19:30:36.596850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:30:47 crc kubenswrapper[4760]: I0123 19:30:47.603781 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:30:47 crc kubenswrapper[4760]: E0123 19:30:47.605663 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d" Jan 23 19:30:58 crc kubenswrapper[4760]: I0123 19:30:58.596789 4760 scope.go:117] "RemoveContainer" containerID="e56005e7d737a798cdc15e093f8e44513a13b2762db6efc7d49cab64ecd7a4a4" Jan 23 19:30:58 crc kubenswrapper[4760]: E0123 19:30:58.597435 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xsk7_openshift-machine-config-operator(20652c61-310f-464d-ae66-dfc025a16b8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xsk7" podUID="20652c61-310f-464d-ae66-dfc025a16b8d"